
A concept called adaptive business continuity was first introduced to readers in the Spring 2019 issue of Disaster Recovery Journal. That article explained adaptive business continuity (or merely “adaptive” from this point forward) and its 10 principles. This time around we are going to talk a bit more practically about how one might execute the adaptive framework. While familiarity with this framework may be helpful, it is not a requirement to understand some of the ideas presented here. If you are interested in learning more, the adaptive website and contact information are provided at the end of this article.

In the previous piece, it was explained that the execution of adaptive BC is relatively simple: measure, improve, and re-measure. There are many ways one can be successful by following this method. For this article, I have chosen to take one approach and dive deep into its execution. The strategy starts by measuring current capabilities. The results are then used to identify areas in which to make improvements. After implementing improvements, fresh measurements can be used to validate the effectiveness of the changes made and to identify further opportunities to refine capabilities. This enables the business continuity professional to move quickly and adapt as needed. By jettisoning all the other activities and deliverables required by traditional methods, business continuity programs can deliver value faster, to more of the organization, and with far less overhead. Let's dive into the details of taking this approach.

Measuring Recoverability

Measurement starts with the RPC Model of Organizational Recoverability. This metric is predicated on the understanding that three components contribute to effective response and recovery: resources, procedures, and competencies (or just RPC). RPC postulates that a complete lack of any one of these components prevents effective recovery. Having plans (aka procedures) as well as experienced, well-trained, and practiced staff (competencies) provides little benefit if the resources needed to execute recovery are not available and cannot be obtained. Alternatively, having all the needed resources and the procedures to execute effectively will not provide any benefit if there is no awareness of the availability of those resources or if individuals lack the ability to execute procedures (competencies). In other words, effective recovery requires the existence of all three components. And the effectiveness of an organization's recovery will only be as good as the weakest component.
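As a rough illustration only (the RPC model does not prescribe a formula), the "weakest component" idea behaves like a minimum function: no matter how strong two components are, the third caps the overall result. A minimal sketch, assuming each component is scored 0-100:

```python
def recoverability(resources: float, procedures: float, competencies: float) -> float:
    """Overall recoverability is capped by the weakest RPC component.

    Each argument is an illustrative 0-100 estimate of how much of the
    ideal component exists for a given service.
    """
    return min(resources, procedures, competencies)


# Strong plans and skilled staff provide little benefit when resources
# are largely missing:
print(recoverability(resources=20, procedures=90, competencies=85))  # -> 20
```

The takeaway is that improvement effort yields the most benefit when directed at the lowest-scoring component rather than spread evenly.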

Remember that adaptive starts with measurement. This will seem unusual to experienced practitioners as it runs contrary to traditional BC thinking. Long-standing approaches to the discipline assume that, without an existing plan, recovery capability cannot exist. RPC and adaptive, by contrast, postulate that nearly all organizations have some degree of recovery capability. Based on this, measurements should provide a transparent view of what capabilities exist while also helping to identify where steps can be taken to improve on them.

So how does one go about measuring their resources, procedures, and competencies? Are they specific, known items or more conceptual? What methods might one employ, and who would the audience be for establishing and collecting such metrics? Let’s start by identifying what to measure. In other words, how do we make an objective determination of the existence of resources, procedures, and competencies? Many answers exist, but practically speaking, there are two main lines of approach.

What to Measure

The first option is identifying and measuring known items that contribute to recovery capability. For resources, this can include such things as laptops, printers, power cords, flashlights, tools, and even software. The options are nearly limitless, but each organization is going to have specific items needed to perform services, whether that is vehicles for transportation companies, plastic mold injectors for manufacturers, or medical devices for healthcare companies. Procedures can naturally include plans, but adaptive does not directly associate recovery capability with the existence of continuity plans. Procedures could also include things like procedural checklists, manual logs, call lists, printed inventories, and policies. In other words, anything written, printed, or even electronic that aids in the execution of response and recovery activities. Competencies consist of specific skills, knowledge, or experience that make team members more effective when disruptions occur. This might include recent participation in exercises or experience actually responding to an event. But it also covers things like familiarity with connecting to the company's VPN or being able to get Internet connectivity via a smartphone hotspot. It might be a specific, verifiable certification such as forklift operation, commercial vehicle licensing, or medical equipment use.

The difficulty with this approach is that these specifics may vary from service to service. For instance, the means of recovering the manufacturing environment will differ drastically from what may be needed to recover payroll services. While some commonality may exist, the more specific the data being collected, the more likely the data collection will need to be tailored to each service, even within the same organization.

The second option is to determine the existence of resources, procedures, and competencies as a percentage or scale in comparison to what would be needed in an ideal situation. This provides for more generic terminology as well as more consistent applicability across services. Consider this as simply asking team members about what percentage of resources exist or could be easily accessible in the event of a loss. Naturally, this line of inquiry could be made more granular while still remaining relatively generic. Resource data could be broken up to focus on the existence of technical systems, emergency supplies, physical assets, and so forth. Data collection could also be divided by different phases of event or incident response such as alerting and communications, staff mobilization, recovery strategies, resumption, etc. The same approach can be similarly applied to data related to procedures such as forms, online or locally stored plans or policies, and printed materials. Following the same line for competencies, one might break up team member questions by role, such as leadership competencies, strategy execution, communications capabilities, and even emergency or incident response experience.
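To make the percentage-based option concrete, here is a hedged sketch of what the collected data for one service might look like and how sub-category responses could roll up into one score per RPC component. The category names and numbers are illustrative assumptions, not a prescribed taxonomy:

```python
# Illustrative survey results for a single service, scored 0-100 as a
# percentage of what would exist in an ideal situation.
service_scores = {
    "resources": {
        "technical systems": 80,
        "emergency supplies": 40,
        "physical assets": 65,
    },
    "procedures": {
        "plans and policies": 55,
        "checklists and forms": 30,
        "printed materials": 70,
    },
    "competencies": {
        "leadership": 75,
        "strategy execution": 60,
        "incident experience": 45,
    },
}

# Average the sub-categories to get one score per RPC component.
component_scores = {
    component: sum(subs.values()) / len(subs)
    for component, subs in service_scores.items()
}
print(component_scores)
```

Because the questions stay generic, the same structure can be reused across services, with only the responses differing.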

Done the right way, this enables a more general approach to data collection that can be applied equally to all services. The time saved on the front end, however, would have to be made up when looking at improvement. Knowing a service lacks a significant percentage of resources is helpful, but one will need to identify what those items are in order to make any improvement. Similarly, improvements to procedures and competencies require additional time and effort in order to identify the specifics before any improvements can be made.

Most likely, some combination of the two will prove advantageous. Consider that a different line of questions could be used for customer-facing or operational services as compared to support functions like finance or human resources. Large, multi-faceted organizations might have to take a closer look at what works best and strike a balance between the number of different surveys or questionnaires required and the effort needed to identify improvement activities.

How to Measure

Now that the concept of what to measure is clear, let's dive into the how of data collection. There are several options, each with its own benefits and drawbacks. Some align better with the specific types of data capture mentioned earlier. As with anything, it may be that some combination of two or more of these approaches works best.

First, many organizations already have tools used to store the information previously mentioned. These systems might house data pertaining to assets and resources, documentation, policies, knowledge repositories, training modules and their completion rates, or exercise and event data. If one were to follow the route of gathering very specific information about the components needed for recovery, this might be the best way to start. The existence of such tools will make decisions on whether to follow this approach much more straightforward. Depending on the maturity of existing processes, some of this data may already be aligned to locations and services, making the aggregation and consolidation of data much easier.

Another option is to question individuals about their awareness or understanding of specific components of recoverability. A wide variety of tools exist that enable the creation of custom surveys. Even without the ability to send online-type surveys, the creative practitioner could develop a data repository and gather responses verbally or via e-mail. This approach can be used regardless of what type of RPC data (specific or generic) the practitioner decides to use. Remember, if one pursues a very specific line of questioning, then surveys may have to be tailored to specific services and their respective recipients. A very general approach to measurement might enable the use of a single survey for all audiences. The benefit of this approach is it easily supports either flavor of data collection or even a hybrid.

A third option is the use of group discussions to collect data. Bringing people together in groups provides multiple benefits. First, it ensures a consistent understanding and builds consensus as to the components in place. This directly improves capabilities by informing participants who were previously unaware of the availability of resources. Conversely, some might learn of a lack of resources or other components that they otherwise assumed were available. The second benefit of this approach is that it reduces the volume of data that might otherwise require collection and storage from multiple individuals. Lastly, discussions of this nature can also be used to serve multiple purposes – and not just the collection of capability data. Such meetings could be used to brainstorm improvement opportunities, discuss planning or procedure development, and even build competencies through training and practice. This, however, may be the least practical option because it requires scheduling and pre-planning with the risk of missing key contributors that require later follow-up.

Some combination of the above could provide a form of checks and balances to maximize data quality. For instance, one might start with data that is already housed in existing systems. Once aligned to the proper services or locations, one could use a survey or group discussion to validate individual awareness and assumptions. Similarly, surveys could be sent to all recipients with the data validated and outliers discussed in a group setting. Given that adaptive encourages creative solutions, it is possible one could combine these efforts with other beneficial activities. Exercises, for example, could provide a perfect forum for conducting group discussions or confirming data collected with what would be assumed during the activity.
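The "checks and balances" idea above can be sketched in code: compare individual survey responses against a reference value (say, a score derived from an existing system of record) and flag the outliers for follow-up in a group discussion. The names, data, and tolerance threshold are illustrative assumptions:

```python
def flag_outliers(responses: dict, reference: float, tolerance: float = 20.0) -> list:
    """Return respondents whose estimate differs from the reference
    value by more than `tolerance` points, for later group follow-up."""
    return [
        name
        for name, score in responses.items()
        if abs(score - reference) > tolerance
    ]


# Hypothetical survey responses (0-100) versus a system-derived score of 68.
survey = {"Alice": 70, "Bob": 25, "Carol": 65}
print(flag_outliers(survey, reference=68))  # -> ['Bob']
```

A large divergence does not mean the respondent is wrong; it simply identifies where individual awareness and recorded data disagree and a conversation is warranted.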

Next Steps

Where traditional approaches to business continuity define objectives which must be met, adaptive takes the opposite approach. By measuring an organization’s current capability, proper expectations can be established up-front and the means for improvement better defined. This eliminates a key problem with legacy approaches: the burden of having to meet established goals or risk failure. Business continuity practitioners and the teams they support are pressured to deliver on the strict time-based requirements they have established. This creates an incentive to meet the objective by any means necessary, which runs counter to the goal of finding and addressing genuine problems.

Turning this approach on its head enables the practitioner to set clear expectations with leadership about actual capability. Better still, it is not based on a subjective evaluation performed by the business continuity manager or team. With these metrics, leaders can be provided with an honest picture based on data that already exists or is provided by individuals tasked with the execution of recovery. This greatly reduces the risk of failure and the overwhelming pressure to deliver on potentially impossible objectives. Knowing existing capabilities enables informed decision-making and creates a better understanding of the actions and investment needed to improve.

Better still, this data can be used to identify steps that can be taken to improve. Some actions may require capital investment, such as the procurement of resources, while others may require time and effort, such as the development of procedures or the delivery of training. By illustrating the capability and breaking it down into its component parts, the decision makers within the organization can determine what action to take based on available capital and resources as well as the anticipated improvement to recoverability. Where legacy practices must use data collected in the past in order to make decisions about future effort, this approach enables leadership to make decisions on where to invest and take action based on the future strategy of the organization.

It is important to make a distinction here. Traditional practices define a set of actions with the belief that their execution will deliver the capability expected from the business continuity process. This results in measuring effort only and using compliance as the driver for business continuity activity. Adaptive advocates a completely different tactic. Measures of capability should not be delivered with a mandate to reach a specific level but, instead, as a means for leadership to determine where limited resources should be applied. The metrics should show a clear and honest picture with which the BC professional can have a meaningful conversation with their executive team members about the reality of BC and the limited resources that exist to ensure complete recoverability in all circumstances. This provides senior management with the tools to determine for themselves where they wish the BC program to focus in order to deliver improvement.

The Way Forward

Hopefully, this article answers some of the lingering questions about how one might execute adaptive business continuity principles. This may take some time and thought to absorb. It might even require some degree of unlearning. Experienced BC professionals, in particular, might have the hardest time envisioning a practice that does not first seek to prioritize and scope efforts. Adaptive BC sets out a means whereby support can be provided to all parts of the organization because they all contribute to the delivery of improvement. It also eliminates the errant approach in use today in which the BC practitioner is tasked with defining the program’s priorities. The RPC model, instead, puts that decision squarely in the hands of leadership while providing reliable data to help make that determination.

This article is not meant to be comprehensive, and practitioners are encouraged to learn more and dive deeper into the adaptive approach. Legacy practices have been with us since business continuity was first conceived and have barely changed in the decades since. Adaptive represents the single greatest change to our profession that nearly any of us have seen (or perhaps will see). It also provides the greatest opportunity to move our discipline forward.

You are encouraged to learn more at www.adaptivebcp.org or to reach out directly with thoughts or input by e-mailing info@adaptivebcp.org.

Mark Armour is a business continuity professional with more than 16 years of experience. He is currently the director of global business continuity for Brink's Inc., the worldwide leader in cash management services and secure logistics. He is a member of the International Benchmarking Advisory Board for BC Management and co-author of the book, "Adaptive Business Continuity: A New Approach." He can be reached at mnjarmour@gmail.com.

 
