
Have you ever been to a meeting where the presenter displays a slide packed with percentiles, bar charts, and awkwardly colored stoplight diagrams? Did the confusing, overly complex, and poorly designed metrics distract you from the intent of the meeting? Did the experience leave you just trying to figure out the difference between orange and yellow on that stoplight diagram – causing you to completely miss what the presenter was actually trying to convey? Even worse, were the metrics failing to tell the real story or answer the performance-related question you were really hoping to get answered?

You’re not alone.

Ineffective metrics can harm rather than help if they are not developed to tell a story and communicate the right information to the right people. Instead, it’s critical to choose the right measurements that answer two high-level, core questions: “Are we doing the right things to prepare?” and “Are we really prepared?” If you only give a cursory thought to metrics, you’ll experience an all-too-common result … lower engagement from all stakeholders. Luckily, this is avoidable if you invest the time up front to create metrics that matter and use them to drive action within your program.

Why Measure?

Why should you invest time into analyzing planning activities and response/recovery capabilities? For the business continuity or IT disaster recovery program manager, six key reasons often exist:

  1. Answer fundamental questions about program performance. Metrics help answer fundamental questions, such as “Can we recover?” At their core, metrics help tell a story and deliver meaningful insight to those seeking to understand why the program exists and what capabilities are in place.
  2. Effect change. To effectively communicate the need to alter a program’s current course of action, a program manager must have the measurables to justify their position. This is especially true if that change requires an increase in resources. The ability to highlight a measure of less-than-optimal performance and drill down into the root cause is paramount in initiating, leading, and managing change.
  3. Drive continual improvement. Effective metrics don’t just tell a story, they also paint a picture for the future. Metrics provide a window into the organization’s existing shortfalls and gaps and can be used to highlight those areas that either have been improved or need to be improved.
  4. Drive action. “What gets measured gets done.” While the origin of this quote is debated, what isn’t disputed is the truth behind this statement. Good metrics help paint a picture of what work has been done and what still needs to be done.
  5. Show maturity over time. As stated previously, metrics should tell a story. Metrics should be used to show a trend of gradual or incremental improvement and display the value of continued engagement on behalf of stakeholders.
  6. It’s a requirement. For many industries, business continuity is a requirement (e.g., FFIEC) and, increasingly, organizations choose to align to a standard (e.g., ISO 22301). With increasing visibility of disruptive events in the news and social media, it’s also a common customer requirement that an organization maintain a business continuity program. To meet these requirements, regardless of the driver, the proof will be in the metrics.
What Makes a Metric ‘Good?’

Now that we understand the “why” behind metrics, let’s look at what makes a metric “good.” First and foremost, a good metric should be appropriate for the audience to which it’s being delivered. Put yourself in the recipient’s shoes. What do they want to know, and how can you give them that information in a way they can understand and act on?

Once you have the right information for the right audience, the major obstacle is how to most effectively present it. The following provides a list of considerations for making a metric effective:

  • Quantify whenever possible, qualify if you must. Qualitative metrics are “squishy” and can be left open to interpretation. Quantitative metrics remove subjectivity and enable more direct discussions.
  • Use the language of the business. Communication among different disciplines can lose its effectiveness if too much of it is tied up in jargon. Look for common terms that everyone will understand and communication techniques that are in place in other areas of the business.
  • Measure based on the current state but consider measuring based on what’s going to be important in the future.
  • Be clear and concise and show trends whenever possible.
  • Where metrics fail to measure up to expectations, enable a discussion to determine why and what can be done to correct the performance issue.
Types of Metrics

Let’s examine different types of metrics. Quality metrics for a business continuity program can be divided into three categories: activity and compliance metrics, product and service metrics, and goal-oriented metrics.

Activity and Compliance Metrics

Often referred to as key performance indicators (KPIs), activity metrics track the execution of key deliverables in the business continuity lifecycle (e.g., BIAs, plans, exercises). Oftentimes, regulatory or customer requirements outline the need for specific program elements. By completing these activities and tracking their progress, the program manager is able to monitor and report on the organization’s progress towards achieving compliance with the aforementioned requirements. These items also allow other stakeholders to understand what work has been performed to date and what still needs to be executed in comparison to requirements, both internal and external.

Activity and compliance metrics should answer the question, “Is everyone involved in the program doing what they are supposed to?” Here are some examples of activity and compliance metrics:

  • Number of BIAs completed/updated compared to total expected;
  • Number of supplier business continuity assessments completed compared to the total number of “critical” suppliers; and
  • Number of tabletop exercises conducted, and post-exercise reports produced and approved.
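Activity and compliance metrics like the examples above boil down to simple completion rates. The following is a minimal Python sketch of that idea; the metric names and counts are illustrative assumptions, not figures from any real program.

```python
# Hedged sketch: activity/compliance (KPI) metrics as completion rates.
# All names and counts below are hypothetical examples.

def completion_rate(completed: int, expected: int) -> float:
    """Return percent complete, guarding against an empty denominator."""
    if expected == 0:
        return 0.0
    return round(100 * completed / expected, 1)

# Illustrative counts for one program cycle
activity_metrics = {
    "BIAs completed/updated": completion_rate(42, 50),
    "Critical-supplier BC assessments": completion_rate(18, 25),
    "Tabletop exercises with approved reports": completion_rate(6, 8),
}

for name, pct in activity_metrics.items():
    print(f"{name}: {pct}% complete")
```

Expressing each KPI as “completed versus expected” keeps the denominator explicit, which makes the metric harder to game and easier to trend over time.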

Product and Service Metrics

Often referred to as key risk indicators (KRIs), product and service metrics measure the ability of the organization’s current-state capabilities to meet continuity requirements. These metrics allow the program manager and other stakeholders to identify where gaps exist in the organization’s ability to recover products, services, and their associated processes within stated downtime tolerances.

Product and service metrics should answer the question, “Is what we are doing effective, and can we recover in-line with our risk tolerance and stakeholder expectations?” The following are examples of product and service metrics:

  • Product or service recovery capability measured against leadership’s stated downtime tolerance; and
  • Business “process” or “activity” recovery capability measured against the requested recovery objective, while identifying potential gaps.
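The gap-identification logic behind these examples can be sketched as a direct comparison of demonstrated recovery capability against the stated downtime tolerance. The service names, field names, and hour figures below are all illustrative assumptions.

```python
# Hedged sketch: product/service (KRI) metrics as capability-vs-tolerance
# comparisons. All services and hour figures are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceMetric:
    name: str
    rto_hours: float          # leadership's stated downtime tolerance
    capability_hours: float   # demonstrated (e.g., exercised) recovery time

    @property
    def gap_hours(self) -> float:
        """Positive when demonstrated capability falls short of tolerance."""
        return max(0.0, self.capability_hours - self.rto_hours)

services = [
    ServiceMetric("Order processing", rto_hours=4, capability_hours=6),
    ServiceMetric("Payroll", rto_hours=24, capability_hours=12),
]

gaps = [s for s in services if s.gap_hours > 0]
for s in gaps:
    print(f"GAP: {s.name} recovers in {s.capability_hours}h "
          f"vs {s.rto_hours}h tolerance")
```

Framing each product or service as a tolerance-versus-capability pair yields the “yes or no” recoverability answer senior management tends to ask for, while the gap size prioritizes corrective actions.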

When product and service metrics are tied to the organization’s risk tolerance, they can provide a powerful answer to the question “Can we recover?” that resonates with senior management.

Goal-Oriented Metrics

Goal-oriented metrics measure the performance of the program against a set of preset goals, ideally classified into short-, medium-, and long-term timeframes. These metrics are intended to drive continual improvement in the program by setting targets. Goal-oriented metrics help protect against program stagnation caused by simply executing the same methodology year in and year out. Goal-oriented metrics should answer the question, “Are we improving the effectiveness of our program?” Examples of goal-oriented metrics include:

  • Close the top-20 corrective actions this quarter;
  • 90 percent of employees complete business continuity awareness training this year; and
  • Improvement on a maturity model or conformance assessment.
How to Develop Metrics that Matter

Now that we understand the “why” and the “what,” it’s time to start building a scorecard. Below is a step-by-step process that can help you create and track metrics that matter. As you work through the steps below, remember that a key consideration for developing good metrics is understanding your audience. By understanding the “who,” you can better build your metrics to fuel engagement.

  1. Brainstorm activity and compliance metrics. Build a list of potential performance measures and then work to narrow it down.
  2. Document product and service metrics. Chances are you’ve already engaged leadership to understand the organization’s priorities and gauge their risk appetite. Use this background information to document these priorities, along with maximum downtimes. Think through whether you could answer “yes or no” regarding the ability to recover each product or service.
  3. Include compliance requirements. There may be some requirements that are mandated (e.g., regulatory, contractual, self-imposed). Make sure these are captured in the performance measures you’ve developed.
  4. Map metrics to stakeholders. Once you have a good running list of metrics, the next step is to map them to relevant, interested parties and make sure they remain relevant. This step should allow you to refine, add, or eliminate metrics as needed.
  5. Identify how to collect and track data. The next step is to identify how to collect the requisite information and where to track progress. Many business continuity management software tools can track most activity or product and service metrics, but there may be some information that needs to be recorded manually. If you’re not using a business continuity management tool, you may have to create an Excel spreadsheet or database to manage this information.
  6. Figure out when and how to deliver. Once you have figured out what to track, how to track it, and who to deliver it to, the next step is to make sure that you deliver metrics in the appropriate forum at the right frequency. Map your new metrics to recurring meetings you have scheduled with your sponsor and team, steering committee, and recovery team owners in the business.
  7. Identify ways to measure progress over time. As a program matures, you will need to show improvement over time or relative changes in program maturity. Some examples of this include measuring adherence to a standard over time, identifying improvements resulting from audits, or using a maturity model to show gradual growth.
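For the “track it yourself” option in step 5 and the trending in step 7, a flat record layout that could live in a spreadsheet or small database is usually enough. The sketch below assumes hypothetical field names and quarterly values purely for illustration.

```python
# Hedged sketch: a flat metric-history layout (one row per metric per
# period, as in a spreadsheet) plus a period-over-period trend for
# showing maturity over time. All field names and values are assumed.
import csv
import io

# Quarterly snapshots of one metric (illustrative data)
rows = """metric,period,value
BIA completion %,2020-Q3,70
BIA completion %,2020-Q4,78
BIA completion %,2021-Q1,84
"""

trend = [
    (r["period"], float(r["value"]))
    for r in csv.DictReader(io.StringIO(rows))
]

# Period-over-period deltas show whether the program is actually maturing
deltas = [b - a for (_, a), (_, b) in zip(trend, trend[1:])]
print(trend)
print(deltas)
```

Keeping one row per metric per reporting period makes the trend views in step 7 a simple query rather than a manual reconstruction each quarter.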
Pitfalls to Avoid

Ineffective metrics can cause more harm than good if they are not deliberately developed to fuel engagement. The following is a list of the most common pitfalls to avoid when developing metrics that matter.

  1. Focusing Solely on KPIs. KPIs are an important aspect of every scorecard, but KPIs in isolation don’t tell a complete story. It’s important to understand how many business continuity plans have been completed and approved, or exercises conducted, but those numbers don’t inform stakeholders about the organization’s response and recovery capabilities or the increased levels of resilience that the business continuity program delivers. This, in turn, can lead to decreased interest and waning engagement from stakeholders.
  2. Delivering metrics at the wrong time to the wrong people. This is one of the most common issues business continuity program managers face. One size does not fit all for stakeholders and metrics. The metrics you deliver to a customer will be different than those you deliver to a regulator, which will also be different than those you deliver to a senior executive. Being able to package metrics in a way that communicates effectively to different stakeholders is what will drive action and continual improvement.
  3. Presenting metrics without a root cause analysis. If a metric fails to measure up to expectations, not being prepared to discuss the root cause or tie it to a recommended corrective action will diminish engagement. It will lead stakeholders to question why we measure.
  4. Metrics as an end rather than a means to an end. Metrics are a bridge to continual improvement, not an end in themselves. Actions and change should be the end state of developing meaningful metrics.

To recap: ineffective metrics can harm rather than help if they are not developed to tell a story and communicate the right information to the right people. It’s critical to choose the right measurements that answer two high-level, core questions: “Are we doing the right things to prepare?” and “Are we really prepared?”

You can follow all of the guidelines and steps in this article, but how do you know if what you’ve developed is “good?” Examine your scorecard and then ask yourself the following questions:

  1. Are your recovery capabilities consistent with business continuity targets and requirements?
  2. Are you doing the things you’re supposed to do to prepare?
  3. Do your interested parties feel informed to help drive continual improvement?

If what you’ve developed can answer those three questions, then chances are you’ve developed metrics that matter that will, in turn, drive engagement across stakeholders.


Nathan Early is a consultant at Avalution Consulting. Early contributed to the development of Avalution’s Business Continuity Operating System™ and has experience building and managing first-class business continuity programs for numerous organizations across a variety of sectors, including life sciences, financial services, and manufacturing. Prior to joining Avalution, Early served in the U.S. Army, specializing in operational leadership and strategic development initiatives. Early can be reached at

Michael Bratton is the consulting practice leader at Avalution Consulting. Bratton has experience implementing business continuity programs for clients of all sizes and across almost all industries. He has presented at industry conferences and continues to play a major role in the development of business continuity best practices, including the development of Avalution’s Business Continuity Operating System™. Prior to working at Avalution, Bratton served in the U.S. Army as a communications officer, specializing in telecommunications, information management, and contingency planning. Bratton can be reached at

Puppy Love: Have Your Executives Fallen Hard for Business Continuity?

Once, long ago, I was on a date with an attractive young woman and so enamored with her that I began to plan out when we could see each other again … while still on the date. That day we planned several more dates, and within months, we were married.

Has anyone ever felt this way about your business continuity program? Have your executives felt the pitter-patter in their chest whenever your program is mentioned? Have they casually ever walked by your office hoping to bump into you? Or planned the next meeting with you within a few days after your last one?

No? Me either.

It’s You, Not Them

It’s an almost predictable pattern. The first year they loved you (or tolerated you) and then slowly, they started to pull away. Soon, you don’t even talk anymore. Ok, I’m being dramatic. But has this happened to you before? Maybe it really isn’t them. Maybe it is … us?

Many articles out there have been critical of the traditional BIA-to-BC-plan model. Articles such as David Lindstedt’s “BCP is Broken” outline some key reasons why this approach should be revised.

The main critiques suggest that BC hasn’t evolved much over the years and that we’re not able to prove our worth very easily. Rather than being the object of an executive’s affections, we haven’t engaged them in the program and created brand value.

I think there might be a way to change the way organizational leaders look at BC. But first, let’s examine the traditional “BIA to plan” cycle so we can figure out where things can go wrong.

Same Thing Different Day

Immediately upon getting the keys to my new business continuity program, I set upon the traditional path of developing a program. Plan. Do. Check. Act.

In the traditional sense, this means you identify the organization’s critical processes and dependencies, then create the proposed recovery strategies you’d most likely implement if that process fails. And then you test to see if there are any gaps.

Of course everyone’s program and culture are going to be slightly different. But tell me if the following steps sound familiar: 

  • Interview key leaders to determine their priorities. Do it whenever they will meet with you, which will be at 1 a.m. on a Saturday.
  • Create a BC steering committee filled with people that show up only after you beg them or after you wait by their car each evening after work.
  • Create a program policy and general risk assessment in a vacuum. Constantly remind people we have a policy.
  • Convince your executive a BC software tool will be helpful. Go through a rigorous process to find just the right one.
  • Implement selected BC tool and become the only primary user. Prepare to hold laptops up to users’ faces and tell them what to type when they finally log in.
  • Make every work unit in the entire company complete a business impact analysis (BIA). Explain what BIA means 4,000 times. Review this information for weeks and then try to get anyone who will listen to look at it with you. (Spoiler alert: No one really will.)
  • Create recovery strategies for each of the 10 risk scenarios you’ve identified ... by yourself.
  • Reset everyone’s password to the BC software tool 37 times apiece.
  • Write plans and checklists and arrange exercises to test the people who haven’t read the plan or checklists.
  • Realize it’s been a year and think about why you’re doing this work and where it all went wrong.
  • Repeat.
Where Did It All Go Wrong?

At the end of this traditional model you end up with a few very positive things. You will have process mapping data. You will have recovery time objectives (RTO). You will also begin to create BC plans for a specific scenario. And you can possibly make the case that there is some understanding of BC practices amongst a large segment of your organization. But you also end up being about six to 12 months down the road.

And unless your business has a super emergency after this point, and every single plan is activated all at once, it will be very difficult to convince people that this is all worth it.

I think there are three main reasons why this cycle can be a bust and prove unsustainable.

Too time-consuming

This process can easily consume a year of your time, especially if you’re a one- or two-person operation. By the time you set up everything, conduct a BIA, develop recovery strategies, and get a physical printed plan to show off, it can be close to a year. What are you going to do if you have a business disruption in the first two months after you start? And how long can you expect anyone to wait before you can demonstrate that you made the business more resilient?

No undeniable value

It’s already very difficult to demonstrate value around a support function like business continuity. So, if you’re spending all your time sucking up money (time and tools for BC) and you haven’t made anyone really aware of the benefits of BC, you will end up with the classic question: “What does BC do again?” The truly “all-in” organization should be able to speak to one or two reasons why you’re there. And in my experience, plans aren’t the thing that makes people love your program. 

Lack of clear wins

Just because you have a documented plan, how does that translate into reduced risk? The traditional BIA may help you understand key information like critical processes and your RTO. And maybe during the effort, you find some areas of concern. But did you fix them during the cycle? This model doesn’t tend to yield very specific results or give you the ability to eliminate a single point of failure right away. You will need to dive deeper to accomplish that.

In my next article, I’m going to explain how thinking about a BC program in the same way you approach a new romantic relationship can bring completely different results. By making changes at the front end of the BC cycle, when you first meet and begin establishing your relationship, you can really change the way you generate interest and set yourself up for a relationship no one could resist. 

Shane Mathew, vice president of professional services at Virtual Corporation, oversees the consulting and software implementation. Prior to joining Virtual Corporation, Mathew served in various leadership roles within business continuity and emergency management in both healthcare and governmental organizations. Mathew has led the creation and implementation of business resiliency and risk identification programs for several organizations, including governmental, medical centers, and a multi-site, national pharmaceutical division of a global healthcare organization.

