If we know anything about the future, it’s that the number of critical events will continue to grow and their severity will intensify. The signs have been all around us for decades. In 2019, there were 820 natural disasters – three times the number 30 years ago. Costs have escalated as well. The cumulative damage from hurricanes since 1987 is valued at nearly $600 billion, with almost 3,100 lives lost.

If storms were all organizations needed to worry about, that would be enough to keep business continuity and risk management teams busy. But the rate of man-made security threats also continues to rise. As of June 10, there had been 260 mass shootings in 2021 alone, putting the U.S. on pace to nearly double the 270 shootings recorded in 2014 and match the 610 that occurred in 2020. Further, 70 percent of active shooter incidents happen in an educational or business environment, Federal Bureau of Investigation (FBI) research shows.

Although these numbers are staggering, they’re today’s reality. For organizations to mitigate the risk of future critical events, they’ll need to raise the level of risk analysis to what can be referred to as operational resilience 2.0.

More Complexity, More Data

What makes today’s rate of chaotic events even more troublesome is the complexity of risk and organizations’ increased exposure to critical events. For instance, remote and hybrid working expanded the number of locations businesses must protect. Another lesson from 2020 is that organizations often must handle multiple critical events at once. In California, for example, government agencies and businesses responded to wildfires amid social distancing and mask mandates.

If organizations want to protect people, places and property across all locations, fast access to relevant intelligence is essential. Swift action is dependent upon getting the right information into the right hands at the right time. Yet most business continuity and risk management teams have trouble getting their hands on life-saving information in time to act. According to a recent report by Aberdeen Research, 61 percent of enterprises experienced an incident where the right information came too late to respond to a critical event.

One reason for this intolerable situation is that the amount of data created, captured and replicated in any given year continues to explode. According to IDC, the annual size of the global datasphere is expected to reach 175 zettabytes by 2025.

Most importantly, 30 percent of this data will be real-time. Although immediacy is central to effective risk mitigation, analyzing data manually is a massive drain on human resources and so time-consuming that answers often come far too late.

Since the sheer volume of data available makes it impossible to use traditional manual methods to identify critical events and understand how they impact your organization, business leaders need another way to acquire vetted, actionable intelligence. So where do businesses go from here?

As the challenges outlined above prove, managing uncertainty related to your people, places or property is as difficult as it has ever been. Businesses have gone from overseeing 10 office locations to assessing the risk for thousands of employee locations – almost overnight. In the wake of sudden and uncertain risk, business leaders need better, more relevant data to achieve operational resilience. Artificial intelligence delivers the speed and accuracy analysts need to make sense of how fast-moving critical events will impact their organizations.

Making AI Work for Risk Assessment

To fully understand the complexity of this task and the full extent of AI capabilities, it’s helpful to look at how the technology works. As a CTO, I have personally seen a surge in the intersection of IT and business continuity, particularly in the last 15 months.

When threats emerge, business continuity or security operations teams need early access to actionable intelligence. They don’t have time to sift through multiple sets of reports to make a decision. They need IT leaders to step up and help them leverage the enormous volume of data to better assess their specific risk. To make AI work for risk assessment, three distinct steps need to be taken.

Step 1: Improve the data you’re using

Data is a rich source of intelligence, but to make it truly useful and relevant requires thoughtful consideration about what to gather and how best to organize it. It’s also important to understand that more data doesn’t necessarily produce useful intelligence.

In fact, an abundance of irrelevant information makes it especially challenging to find the signal amid the noise. To transform raw data into actionable intelligence requires business continuity experts to work with IT to select, clean and classify both structured (weather alerts) and unstructured (news articles) data feeds. AI and machine learning add speed and accuracy to this process. Here’s how it works:

  • Ingest. Information about emerging threats is available from thousands of sources: police scanners, CDC reports, FBI intelligence, news articles, social media, weather alerts and so much more. It’s important at this stage to think about what kind of data would be helpful and to select feeds that deliver the right information. A feed covering news events in California isn’t going to be useful for organizations that don’t have locations or customers in that state.
  • Clean. In its raw format, data won’t be quite ready to use. For instance, a news feed contains a good deal of information, but much of it may not be relevant. An article may reference past events or include an advertisement for a crime show. AI techniques such as text extraction, topic filtering and validation will remove the “noise” from structured and unstructured information, so only the pertinent information remains.
  • Classify. Classification will help you fine-tune your understanding of the situation and determine what’s actually happening. Machine learning algorithms are trained to identify critical event types based on other information present in the data source. Proper classification is crucial because it drives further processing. In a hurricane, you’ll want to know wind speed. If it’s an active shooting, you may want to learn how many shots have been fired. The power in this technology is that it’s capable of parsing, assessing and categorizing data feeds at scale.
  • Locate. On the surface, discovering the location of an event seems easy. The city may be noted in the dateline of a news story or referenced at the top of a weather alert. But the precise location is vital to understanding whether a critical event will impact your business. When looking at different data sources, it’s not always easy to discern where something is happening. There may be information about a street, but not a specific address; a source may mention the name of a building, but not the floor where the incident is unfolding. However, by scanning across different data sources reporting on that same event and identifying contextual clues, AI can pinpoint the exact location of a critical event.
  • Detect. Once we know what and where an event is happening, we need more details. Using AI, we can glean insights that help evaluate the immediacy and nature of the risk. Natural language processing and machine learning move through a two-step process to evaluate dates and times provided in sources and determine exactly when a crisis has happened. Detection also helps answer questions like: What type of gun was used in the shooting? How many bullets were fired? In what direction is the hurricane traveling? What was the speed of a train? How many people were injured?
  • Cluster. Finally, information needs to be vetted and validated, so you can have confidence in the intelligence. AI finds similarities in critical event details like location, time and event type. Multiple stories about the same event are labeled and grouped together. Triangulating data in this way allows you to understand the criticality of a piece of information. For instance, a tweet may be interesting, but until it’s vetted and backed up by another source – such as a police scanner item – you can’t have much confidence in it. AI has the ability to continually vet information even while an event unfolds.
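The six stages above can be sketched as a toy pipeline. This is a minimal illustration, not a production system: the keyword rules stand in for trained classifiers, and the feed and field names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative keyword rules standing in for trained ML classifiers.
EVENT_TYPES = {
    "hurricane": ["hurricane", "storm surge", "wind speed"],
    "active_shooter": ["shots fired", "shooter", "gunfire"],
    "wildfire": ["wildfire", "evacuation order", "acres burned"],
}

NOISE_MARKERS = ["advertisement", "sponsored", "watch tonight"]

@dataclass
class Event:
    raw_text: str
    source: str
    event_type: str = "unknown"
    location: str = "unknown"

def ingest(feeds):
    """Flatten multiple feeds (police scanner, news, weather) into one stream."""
    return [item for feed in feeds for item in feed]

def clean(items):
    """Drop items that look like ads or other irrelevant noise."""
    return [i for i in items
            if not any(m in i["text"].lower() for m in NOISE_MARKERS)]

def classify(item):
    """Assign an event type; real systems would use a trained model here."""
    text = item["text"].lower()
    for etype, keywords in EVENT_TYPES.items():
        if any(kw in text for kw in keywords):
            return etype
    return "unknown"

def locate(item):
    """Real systems triangulate location from contextual clues across sources;
    here we simply trust the feed's own location tag."""
    return item.get("location", "unknown")

def cluster(events):
    """Group reports of the same (type, location) so sources corroborate each other."""
    groups = {}
    for e in events:
        groups.setdefault((e.event_type, e.location), []).append(e)
    return groups

def run_pipeline(feeds):
    events = [Event(raw_text=item["text"], source=item["source"],
                    event_type=classify(item), location=locate(item))
              for item in clean(ingest(feeds))]
    return cluster(events)
```

The point of the sketch is the shape of the flow: noise is stripped before classification, and clustering at the end is what lets a tweet be vetted against a police scanner item before anyone acts on it.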

Step 2: Correlate external data with internal data about your people, places and property

Once you understand the broad scope of a critical event, the next step is to cross-reference external findings with internal data about people, places and property. You’ll want to integrate information such as the current locations of employees, offices or trucks, employee travel itineraries, and office sign-in logs. For instance, corporate ID badge information will tell which employees are working in an impacted office and who might be visiting for the day. Security logins and IP addresses will help you identify who’s working remotely – and where.

Correlating this data will allow you to quickly identify who has been impacted and help you understand what action to take. You’ll be able to act swiftly to communicate the right message to the right groups of people. Further, it ensures business continuity. If customers are impacted, you can take steps to ensure products are delivered and services continue uninterrupted. As an example, athenahealth was able to address staffing levels as COVID-19 safety measures impacted work scheduling. Timely intelligence helped them engage select teams with relevant messages, allowing staff to adjust schedules to meet needs – especially during hurricane season when some employees were in the path of storms.
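At its simplest, this correlation is a geospatial join between the event and your asset inventory. A minimal sketch, assuming each internal record (employee, office, truck) carries a last-known latitude and longitude:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impacted_assets(event, assets, radius_km=25.0):
    """Return the internal assets (people, offices, trucks) that fall
    within the event's impact radius."""
    return [a for a in assets
            if haversine_km(event["lat"], event["lon"],
                            a["lat"], a["lon"]) <= radius_km]
```

In practice the asset feed would be assembled from badge swipes, travel itineraries and security logins, and the radius would vary with the event type, but the core question is the same: who and what is inside the circle.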

Step 3: Link your data reports to your critical communications processes

Even before intelligence is in hand, analysts need to be prepared to act. When a storm is bearing down on a key location, there won’t be time to create distribution lists or carefully worded messages and adapt them to the wide variety of formats needed for text, social media and other channels.

Three key tasks can be handled long before threats emerge and set up in your critical communications system, ready to be launched at a moment’s notice.

  1. Creating distribution lists by location, department, product line, etc. In addition to contact information, capture the delivery preferences for each recipient to increase the likelihood they’ll receive and read your messages. It’s also important to have an effective way to keep contact information up to date. One option is to work with your IT team to integrate your critical communications system with your employee directory.
  2. Preparing message templates. Though you may not know specifics about a potential event, you already have a broad outline of what you’ll say to those impacted and what instructions you’ll share. Craft these messages and upload them to your notification system now. When it’s time to use them, simply update with the specifics and send to the impacted lists.
  3. Testing your system. For emergency communications to be effective, recipients must have trust in the sender and the messages. Tests achieve two purposes: They help you spot gaps or problems in your process so you can fix them early, and they help employees recognize the messages they receive from you.
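Pre-staging lists and templates can be as simple as keeping both in your notification system and filling in the specifics at send time. A hypothetical sketch; the template keys, field names and recipients are illustrative:

```python
# Pre-written templates: broad outline now, specifics filled in at send time.
TEMPLATES = {
    "severe_weather": ("Severe weather alert for {site}: {instruction}. "
                       "Reply SAFE when you are out of the area."),
}

# Distribution lists built by location, with each recipient's preferred channel.
DISTRIBUTION_LISTS = {
    "atlanta_office": [
        {"name": "Ada", "channel": "sms"},
        {"name": "Grace", "channel": "email"},
    ],
}

def build_notifications(template_key, list_key, **specifics):
    """Fill a pre-staged template with event specifics and fan it out
    to every recipient on the pre-built list, via their preferred channel."""
    body = TEMPLATES[template_key].format(**specifics)
    return [{"to": r["name"], "via": r["channel"], "body": body}
            for r in DISTRIBUTION_LISTS[list_key]]
```

Because the lists and templates exist before the storm does, the only work left during the event is supplying `site` and `instruction` and pressing send.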

Making Operational Resilience 2.0 a Reality

Sifting through all the available data is a challenge and making sense of it all is an even more arduous task. Artificial intelligence takes your risk analysis to the next level.

But AI alone isn’t enough. Organizations must be able to pair these AI techniques with capabilities that allow them to filter and fine-tune intelligence to their unique business needs. Perhaps you want to focus on certain locations, types of events or levels of severity. Equally important, companies need to pair actionable intelligence delivered by AI with full lifecycle incident management communications. This combination makes operational resilience 2.0 a reality, empowering organizations to take control of a critical event before it overwhelms them.

Operational resilience 2.0 is all about speed, relevance and usability. Organizations that successfully harness AI will ensure they have the right information at the right time, increasing their ability to protect people, places and property. When combined with capabilities that allow you to communicate threats and respond faster, AI can help you achieve better outcomes.


Dustin Radtke

Dustin Radtke is the chief technology officer at OnSolve, a leading critical event management provider for enterprises, SMB organizations, and government entities. With more than 20 years of experience, Radtke excels at building, leading and transforming product and technology portfolios for global businesses. In his role as chief technology officer, Radtke is responsible for setting product strategy and leading a rapidly growing global development and operations organization for OnSolve’s market-leading cloud critical event management applications. Radtke has a proven track record of bringing successful product lines to market as well as reinvigorating existing product lines to achieve their ultimate potential.
