
Summer Journal

Volume 30, Issue 2


Industry Hot News


Nearly every day you read about a new malicious attack on computer networks of vital businesses around the world, and the attacks do not seem to be slowing down. 

According to reports, malware volume skyrocketed in 2016 – up more than 800 percent compared with 2015 – and that number continues to rise.

The most recent attack, WannaCry, targeted computers running the Microsoft Windows operating system by encrypting data and demanding ransom payments in the Bitcoin currency. The attack reportedly locked hundreds of thousands of computers in more than 150 countries, and demanded a $300 payment to restore the encrypted files.



5 Key Changes on the Way

Although nearly a year away, the EU’s new General Data Protection Regulation (GDPR) is fast-approaching for multinational companies, and the clock is ticking to ensure compliance. The changes coming will have far-reaching implications for global businesses: any company operating in the EU must comply or face steep financial penalties.

It’s hard to believe that we’re now less than one year out from the implementation of a major change to data protection laws in Europe: The General Data Protection Regulation, or GDPR.  It is the result of four years’ work by the European Union (EU) to standardize privacy laws and protect residents of the EU from the misuse of their personal data and data breaches in an increasingly digital world.

Most of the personal data protection laws in the EU haven’t been updated since the 1995 Data Protection Directive. In 1995, only one percent of the European population was using the internet. Now, not only is the majority of the global economy digital, but many companies are operating globally and processing personal data across borders. The EU Parliament established the GDPR framework as a way to update and harmonize the laws specific to the usage of millions of individuals’ data.



Monday, 24 July 2017 15:15

What You Need to Know about GDPR

For retailers, the specter of big data looms constantly. Companies are charging into the omni-channel arms race as they try to fend off behemoths like Amazon. Some are going so far as to deploy massive resources into developing their own big data solutions in an attempt to go toe-to-toe with the retail giant.

The natural question that retailers face is what exactly they need to build in-house vs. what they can, and probably should, outsource to vendors.

With the proliferation of the software-as-a-service (SaaS) model, it’s becoming simpler and faster to deploy new solutions in an enterprise setting. This naturally drives ever-increasing innovation in the industry, as old solutions are replaced with newer, more effective ones in mere weeks.



The Business Continuity Institute

Global economic losses resulting from natural disasters during the first half of 2017 were estimated at US$53 billion – 56% lower than the 10-year average of US$122 billion, and 39% lower than the 17-year average of US$87 billion. This is according to Aon Benfield's Global Catastrophe Recap: First Half of 2017 Report. Meanwhile, insured losses were preliminarily estimated at US$22 billion – 35% lower than the 10-year average of US$34 billion, and 12% lower than the 17-year average of US$25 billion.

According to the report, the severe convective storm peril was the costliest disaster type on an economic basis (nearly US$26 billion) during the first half of 2017, comprising 48% of the loss total. The majority of these losses (US$23 billion) were attributable to events in the United States. These types of events also caused the majority of insurance losses (US$17+ billion), comprising 78% of the loss total, and with nearly US$16 billion attributable to widespread hail, damaging straight-line winds, and tornadoes in the US.

Natural disasters claimed at least 2,782 lives during the first half of 2017, the lowest figure since 1986 and significantly below the long-term (1980-2016) average of 40,867. Flooding was the deadliest peril during the period, being responsible for at least 1,806 deaths.

Steve Bowen, Impact Forecasting director and meteorologist, said: "The financial toll from natural catastrophe events during the first six months of 2017 may not have been historic, but it was enough to lead to challenges for governments and the insurance industry around the world. This was especially true in the United States after the insurance industry faced its second-costliest first half on record following a relentless six months of hail-driven severe weather damage. In fact, nearly eight out of ten monetary insurance payouts for global disasters were related to the severe convective storm peril. Other events – such as Cyclone Debbie in Australia, flooding in China and Peru, wildfires in South Africa, and a series of windstorms in Europe – led to notable economic damage costs. As we enter the second half of the year, much of the focus will be on whether an El Niño officially develops. Such an event could have a prominent influence on weather patterns and associated disaster risks."

The report highlights that the US recorded 76% of the global losses sustained by public and private insurance entities during the first half of 2017, while EMEA (Europe, Middle East and Africa) and Asia-Pacific (APAC) each accounted for 10%.

Around 42% of the global economic losses during this period were covered by insurance, above both the near- and medium-term average of 32%, largely because the majority of losses occurred in the US. However, insurance take-up rates continued to grow in other regions, notably Asia-Pacific (APAC) and the Americas.

Adverse weather has consistently been a top-ten threat for business continuity and resilience professionals, according to the Business Continuity Institute’s annual Horizon Scan Report. In the latest edition, more than half of respondents to a global survey expressed concern about the prospect of this type of disruptive event materialising. When the results are filtered to include only respondents from countries where such events are relatively frequent, such as the United States, the level of concern rises considerably.

The Business Continuity Institute

IT professionals believe that compliance and regulation and the unpredictable behaviour of employees will have the biggest impact on data security, according to a survey commissioned by HANDD Business Solutions.

The UK study found that 21% of respondents say regulations, legislation and compliance will be one of the two greatest business challenges to impact data security. The General Data Protection Regulation (GDPR) is causing real concern among professionals in their bid to be compliant by the deadline in less than 12 months. GDPR will not only raise the privacy bar for companies across the EU, but will also impose extra data protection burdens on them.

HANDD CEO and Co-Founder, Ian Davin, commented: “Companies must change their mindset and look at data, not as a fungible commodity, but as a valuable asset. Data is more valuable than a pot of gold, which puts companies in a challenging position as the stewards of that data. C-suite executives must understand the data protection challenges they face and implement a considered plan and methodical approach to protecting sensitive data.”

Worryingly, 41% of those surveyed assign the same level of security resources and spend to all company data, regardless of its importance. Analysing and documenting the characteristics of each data item is a vital part of its journey through an organization. A robust data classification system tags all data with markers defining useful attributes, such as sensitivity level or retention requirement, ensuring that an organization understands exactly which data requires greater levels of protection.
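A classification scheme of the kind described can be sketched as a simple tagging model. The field names, sensitivity levels, and threshold below are illustrative assumptions, not HANDD's actual scheme:

```python
from dataclasses import dataclass, field

# Illustrative sensitivity levels, ordered from least to most sensitive.
LEVELS = ["public", "internal", "confidential", "restricted"]

@dataclass
class DataItem:
    name: str
    sensitivity: str           # one of LEVELS
    retention_years: int       # how long the record must be kept
    tags: set = field(default_factory=set)

def requires_extra_protection(item: DataItem) -> bool:
    """Items classified 'confidential' or above get stronger controls and spend."""
    return LEVELS.index(item.sensitivity) >= LEVELS.index("confidential")

payroll = DataItem("payroll_2017", "restricted", retention_years=7, tags={"PII"})
newsletter = DataItem("newsletter_draft", "public", retention_years=1)

print(requires_extra_protection(payroll))     # True
print(requires_extra_protection(newsletter))  # False
```

Once every record carries markers like these, decisions about storage location, encryption, and protection budget can be driven by the tags rather than applied uniformly to all data.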

While 43% of those surveyed think that employees are an organization’s greatest asset, more than a fifth (21%) believe that the behaviour of employees and their reactions to social engineering attacks, which can trick them into sharing user credentials and sensitive data, also poses a big challenge to data security.

Danny Maher, CTO at HANDD, commented: “Employees are probably your biggest asset, yet they are also your weakest link, and so raising user awareness and improving security consciousness are hugely important for companies that want to drive a culture of security throughout their organization.”

Storage is also a key problem area, with more than a third (35%) citing ensuring that data is stored securely, whether on premises or in the cloud, as their biggest challenge and the one most likely to keep them awake at night. A data record’s classification enables a company to make these decisions, automatically and definitively dictating its location and whether an encryption policy should apply.

Having stored data to comply with its security policy, an organization must ensure that an access management system is in place, which understands roles and responsibilities and allows users to see only the information that they need. In HANDD’s survey, less than half (45%) of IT professionals are confident that they have an identity access management process in place which dictates that users must have different privileges depending on their roles and responsibilities, while 15% have no access management system in place at all.
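The role-based access model the survey asks about can be reduced to a small clearance check. The role names and clearance mapping below are hypothetical examples, not drawn from the survey:

```python
# Minimal role-based access check: users see only the information their
# role permits. Roles and clearance levels here are illustrative.
ROLE_CLEARANCE = {
    "intern": 0,
    "analyst": 1,
    "hr_manager": 2,
    "security_admin": 3,
}

def can_access(role: str, data_clearance: int) -> bool:
    """A user may read a record only if their role's clearance is high enough.
    Unknown roles default to -1, so they are denied access to everything."""
    return ROLE_CLEARANCE.get(role, -1) >= data_clearance

print(can_access("hr_manager", 2))  # True: sufficient clearance
print(can_access("intern", 2))      # False: insufficient clearance
print(can_access("visitor", 0))     # False: unknown roles get no access
```

Even a sketch this small captures the survey's point: privileges differ by role and responsibility, and anyone outside the defined roles sees nothing by default.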

Data breaches, and the disruptive impact they can have on an organization, are the second greatest concern for business continuity and resilience professionals, according to the Business Continuity Institute's latest Horizon Scan Report. 81% of respondents to a global survey expressed concern about the prospect of a breach occurring, making it essential that organizations have mechanisms in place to reduce the chances of a breach occurring, and also have plans in place to respond to such an incident and help lessen its impact.

As large organizations continue to downsize and startups and SMBs look to make every IT dollar stretch, desktop as a service (DaaS) is set to take off. With some researchers forecasting 28.7 percent CAGR for DaaS, managed service providers (MSPs) should take a look at channel programs in this area of the market as it makes inroads into legacy enterprises. Many startups are already familiar with the Google suite of desktop applications, but other alternatives exist in the market, some of them more competitively priced and with better performance characteristics that would have more appeal to the traditional desktop market.

What do MSPs need to know about reselling these cloud apps to their customers? And what objections must they overcome when seeking to displace the gold standard Microsoft Office on-premises enterprise suite? Let’s look at how some other cloud office groupware stack up.



(TNS) - Lake County, Ill., officials are warning residents who've been fighting floodwaters for more than a week now that the fight isn't over yet.

"If you've sandbagged, don't take those out yet," said Mike Warner, executive director of the Lake County Stormwater Management Commission. "Let's get past the next rainfall and think about taking them out next week."

The National Weather Service told county officials they could get a range of 1 to 3 inches of rain through this weekend, with some areas hit with strong rains Wednesday night and into Thursday morning. The Des Plaines River could handle 1 inch without a problem, but 3 inches could spell more woes for nearby buildings and streets.



Our earlier post Working with nature to build resilience to hurricanes discussed how insurers look to natural infrastructure like coastal wetlands and mangrove swamps to mitigate storm losses.

The Mesoamerican Reef, which runs south for some 700 miles from the tip of the Yucatán Peninsula, protects coastal communities and property by reducing the force of storms, but its corals require continued repair.

For every meter of height the reef loses, the potential economic damage from a major hurricane triples, according to The Nature Conservancy (TNC).
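TNC's rule of thumb compounds quickly: tripling per metre lost is exponential growth. A one-line sketch, using a hypothetical baseline figure purely for illustration:

```python
def potential_damage(baseline: float, metres_lost: float) -> float:
    """Potential hurricane damage triples for every metre of reef height lost
    (TNC rule of thumb). The baseline figure is a placeholder assumption."""
    return baseline * 3 ** metres_lost

# With a hypothetical $1bn baseline, losing two metres implies $9bn exposure.
print(potential_damage(1.0, 0))  # 1.0
print(potential_damage(1.0, 1))  # 3.0
print(potential_damage(1.0, 2))  # 9.0
```

The compounding is the insurers' argument for funding repairs: each metre of reef height preserved avoids a threefold increase in exposure.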



In business continuity management, should you start with what you want or with what you have? While business continuity is frequently a goal-driven activity, there is a contrarian point of view that says, “improve on what you have, rather than aiming for something you don’t have”.

Is either point of view superior to the other? If so, which one should you choose?

There are “for and against” arguments to be made in both cases. In the objectives-driven case, you know where you want your organisation to be, and therefore anything that diverges from that happy state is an issue to be resolved. This assumes that you also have realistic, relevant goals, and ways of measuring how well you achieve them.



We’ve mentioned multiple times that implementing a BCM program can be challenging and at times painful. No one likes to point out their business’s vulnerabilities, and the investment of time and dollars to do just that can feel like a burden. We’ve seen our clients struggle with this during the implementation and maintenance of their programs, and the ongoing investment can often be even more difficult. It helps to identify and assess both the tangible and intangible benefits of your initial and continuing investment in the BCM program. Identifying the benefits of a business continuity program helps you define benchmarks and see the light at the end of the proverbial BCM tunnel. We’ll take a look at the more commonly known benefits of a business continuity program. Then, we’ll walk you through some benefits you might not have thought of.



The Business Continuity Institute

An earthquake reaching a magnitude of 6.7 on the Richter Scale has hit the Aegean Sea between the Greek island of Kos and the Turkish resort of Bodrum. The earthquake, with its epicentre at a depth of about 10 km according to the US Geological Survey, struck at 01:31 local time on Friday, and has reportedly killed two people and left hundreds of others injured.

Turkey’s Disaster and Emergency Management Presidency has reported at least 20 aftershocks since the initial earthquake, and at least five of those registered over 4.0, with the largest reaching 4.6.

According to the US Geological Survey, an earthquake of this magnitude (6.0-6.9 on the Richter Scale, classed as strong) can damage a moderate number of well-built structures in populated areas, but earthquake-resistant structures should survive with slight to moderate damage. Poorly designed structures could receive moderate to severe damage. There will be strong to violent shaking in the epicentral area, and the earthquake can be felt across wider areas up to hundreds of kilometres from the epicentre.

The region is no stranger to these types of events. An earthquake registering 7.6 struck near Izmit in the north-west of Turkey in August 1999, killing about 17,000 people, while in September of the same year an earthquake registering 6.0 struck near Athens, killing 143 people. In October 2011, an earthquake registering 7.1 occurred in eastern Turkey, near the city of Van, leaving about 600 people dead.

Wow - terrifying to wake up to massively shaking room at 6.7 #earthquake on #Kos - thank god no one hurt, just shaken

— Tom Riesack (@QuietConsultant) July 20, 2017

While ensuring that employee and stakeholder safety is paramount, organizations need to ensure they are prepared for such events, certainly those in regions where earthquakes are a distinct possibility. Earthquakes may not feature highly in the Business Continuity Institute's latest Horizon Scan Report, partly because they are very region-specific, but a quarter of business continuity and resilience professionals still expressed concern about the possibility of their organization being disrupted by one.

Organizations must consider what would happen if they were affected by an earthquake, or any other type of disruption: what impact it could have, whether anything could be done to prevent or reduce the risk, and how they would respond and recover. Furthermore, they need to consider how they would communicate with their employees and stakeholders to ensure they are kept informed, and kept safe.

The Business Continuity Institute


Canadian businesses are lagging in their risk management approach and are more vulnerable to disruption when compared to their global counterparts, according to a report published by PwC Canada.

Managing risk from the front line revealed that 66% of Canadian respondents (vs 75% globally) had mandatory ethics and compliance training for all employees. When new risks emerge, less than 33% of Canadian businesses (vs 50% globally) reported periodic staff education about new or existing potential risks.

The report also found that future areas of risk and disruption for Canadian businesses will be technology advancements (70% of Canadian respondents predicted disruption vs 55% globally), human capital (49% vs 40%) and operations (37% vs 26%).

While Canadian businesses acknowledged that a big part of addressing their vulnerability to risk can be accomplished by moving risk management to the 'front line', many business operations are keeping risk management at the 'second line' (risk management/compliance) or 'third line' of service (internal audit). Respondents indicated that a lack of sufficient resources (skilled people) is the primary factor preventing a shift of risk management to the first line.

The report reiterates that risk management from the second and third line does not give upper management a clear understanding of their own vulnerabilities. This type of risk management structure has resulted in an inability to manage risks effectively and adapt over time. 

"While Canadian businesses have made some progress when it comes to risk vulnerability, there is still a lot of work that needs to be done in order to catch up with their global competitors," said Kishan Dial, Partner, Risk Assurance, PwC Canada. "By moving risk management to the front line, the organization's leadership will obtain a greater understanding of the risks to their operations and enhance their capacity to manage risks in an agile and proactive way." 

The report makes three key recommendations for addressing business vulnerability:

  1. Shift duties and assign responsibilities: Each line of service should have a defined role regarding risk decisions, monitoring, oversight and assessment of vulnerabilities.
  2. Define risk appetite: Organizations must define risk appetite and leverage the technical tools available to them, including aggregation tracking and reporting.
  3. Establish a risk reporting system: Reporting structures should enable the first line of service, but also require the second and third line to monitor the first line's effectiveness.

"In order to address current and future challenges, Canadian firms must commit to strong risk management structures and processes in order to excel in an ever-evolving economy of the future," adds Dial.

The Business Continuity Institute


UK business leaders identify far fewer risks affecting their businesses than their counterparts in Germany and France, according to research from Gowling WLG, suggesting an overly optimistic outlook among UK business leaders. UK respondents consistently identified between 2% and 25% fewer risks than non-UK respondents for each risk area analysed.

The Digital Risk Calculator revealed that external cyber risks (69%) are thought to be the most concerning category of digital threat for businesses across all countries surveyed. This risk is anticipated to grow even further, with 51% of respondents believing that it will increase within the next three years. 

Commenting on the research Helen Davenport, director at Gowling WLG, said: "The recent wide ranging external cyber attacks such as the WannaCry and Petya hacks reinforce the real and immediate threat of cyber crime to all organisations and businesses.

"However, there tends to be an "it won't happen to me" attitude among business leaders, who on one hand anticipate external cyber attacks will increase over the next three years, but on the other fail to identify such areas of risk as a concern for them. This is likely preventing them from preparing suitably for digital threats that they may face."

Other digital risks of concern to participants include customer security (57%), identity theft / cloning (47%) and rogue employees (42%). More than a third of respondents (40%) also believe that the lack of sufficient technical and business knowledge amongst employees is a risk to their business.

Additionally, one third (32%) of UK businesses feel that digital risks related to regulatory issues have increased during the past three years. However, less than a third (29%) believe that regulatory issues are a risk to their business.



With cloud providers IBM, Microsoft, and Google releasing their quarterly financials within the week, and Amazon soon to follow, the folks at Synergy Research Group have polished their crystal ball in order to determine where it’s all going. They predict good fortune for those in the cloud business, as well as for developers of software that runs in the cloud. The news isn’t quite so stellar for those selling hardware and software to private enterprise data centers, however.

In a report released Monday, Synergy said it expects worldwide revenues from cloud and SaaS services to grow at an average annual rate of 23-29 percent over the next five years and pass the $200 billion mark in 2020. This will come alongside an 11 percent annual growth in sales of infrastructure to hyperscale cloud providers.

Public clouds will see the strongest growth, with an average gain of 29 percent annually, followed by managed or hosted private cloud services at 26 percent and enterprise SaaS at 23 percent. APAC will be the highest growth region, followed by EMEA and North America. The highest growth areas will be databases and IoT-oriented IaaS/PaaS service.



Wednesday, 19 July 2017 16:47

Cloud Market Forecast to Hit $200B by 2020

(TNS) - Cherokee County, Okla., will soon boast a new program to keep residents informed when disaster strikes, after the Board of Commissioners approved a new mass communication system for Emergency Management.

CivicReady, a product of CivicPlus, will alert citizens with time-sensitive information, ensuring effective communications that could keep them safe. Tahlequah and Cherokee County EM Director Mike Underwood said he wishes the new system was in place last week.

"Last week, when we had the bomb threat here, that would have been a pretty good tool to not only take care of our citizens and let them know what was going on, but we could also have grouped in all of our employees," said Underwood. "With one phone call, it would taken care of pretty much everybody, instead of having to hunt and make sure you've got everybody."

In the past, Underwood has used Blackboard to spread the word about immediate emergencies. However, he said CivicReady will likely end up being cheaper at $7,000 annually, and will include extra features.



LITTLE ROCK, Ark. – Arkansas disaster survivors whose homes were damaged in the severe storms, tornadoes, straight-line winds and flooding between April 26 and May 19 do not have to wait for an insurance settlement to apply for federal assistance.

Survivors with insurance may register with FEMA for grants for temporary rental assistance, essential home repairs and other disaster-related needs not covered by insurance.

Registration is encouraged even if survivors have insurance coverage. Policies vary in coverage and may not pay for temporary housing or have other insurance gaps.

Once registered, applicants with insurance policies covering storm-related loss and damage are mailed a "Request for Information" as part of FEMA’s verification process to avoid duplicating insurance payments. By law, federal assistance cannot duplicate assistance provided by other sources.

Waiting on the insurance settlement may make a disaster survivor miss the FEMA deadline to apply and lose the opportunity to apply for federal disaster assistance.

Federal assistance is available to eligible individuals and households in 16 Arkansas counties: Benton, Boone, Carroll, Clay, Faulkner, Fulton, Jackson, Lawrence, Prairie, Pulaski, Randolph, Saline, Washington, White, Woodruff and Yell. Damage or loss from the severe storms, tornadoes, straight-line winds and flooding must have occurred between April 26 and May 19.

To register for FEMA disaster assistance:

  • Call the FEMA Helpline at 800-621-3362. Multilingual operators are available. Persons who are deaf, hard of hearing or have a speech disability and use a TTY may call 800-462-7585. If you use 711 or VRS (Video Relay Service) or require accommodations while visiting a Disaster Recovery Center, call 800-621-3362. The toll-free numbers are open daily from 7 a.m. to 10 p.m.
  • Go online to DisasterAssistance.gov (also in Spanish);
  • Download the FEMA mobile app  (also available in Spanish) at Google Play or the Apple App Store.

If you are a homeowner or renter, FEMA may refer you to SBA. SBA disaster loans are the primary source of money to pay for repair or replacement costs not fully covered by insurance or other compensation. Homeowners may borrow up to $200,000 to repair or replace their primary residence. Homeowners and renters may borrow up to $40,000 to replace personal property.

There are three ways to apply to SBA after you register with FEMA:

  • Call SBA at 800-659-2955. Individuals who are deaf or hard of hearing may call
  • Apply online using the Electronic Loan Application via SBA’s secure website at: https://disasterloan.sba.gov/ela.
  • Apply by mail: Complete a paper application and mail it to SBA at 14925 Kingsport Road, Ft. Worth TX 76155-2243.

Visit a Disaster Recovery Center for personal help. Locations are found at the FEMA DRC Locator or at SBA disaster loan.

For updates on the Arkansas response and recovery, follow the Arkansas Department of Emergency Management (@AR_Emergencies) on Twitter and Facebook and adem.arkansas.gov. Additional information is available at fema.gov/disaster/4318.


FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

(TNS) - Oakland, Calif., officials in this fire-ravaged city reacted with alarm Monday over a report by this news organization that almost 80 percent of firefighter referrals to inspect dangerous conditions went ignored over the last six years.

“It is horrifying,” Councilwoman Rebecca Kaplan said of the investigation’s findings. “In fact, one of the issues (the story) identified is how it gets decided who gets inspected.”

In early 2017, a few months after the Ghost Ship warehouse fire killed 36 people, Kaplan proposed reprioritizing which businesses get inspected. Kaplan said she had heard from residents who said their businesses received multiple inspections, while others were never inspected.



FedEx Corp has disclosed in a securities filing that its international delivery business, TNT Express BV, was significantly affected by the June 27 Petya cyberattack.

The courier company apparently did not have cyber insurance or any other insurance that would cover losses from Petya, according to a report by The Wall Street Journal, via the I.I.I. Daily.

A new emerging risk report from Lloyd’s and risk modeling firm Cyence notes that cyberattacks have the potential to trigger billions of dollars of insured losses, yet there is a massive underinsurance gap.



Wednesday, 19 July 2017 16:35

Cyber protection gap akin to nat cat

The Business Continuity Institute

There’s no point in saying “it will never happen to me” as disruptions are always just around the corner, regardless of what sector or location you are in. This reality was brought home to us overnight as thunderstorms with strong winds and heavy rain swept across the south of England. The problem was exacerbated by dry weather in recent months leaving the ground hard, so rain water could not easily soak away, resulting in flash floods.

The aftermath was plain to see this morning – standing water, trees down and debris brought by the flooding scattered everywhere. Last night there were reports of the urgent need for sandbags as water levels rose, and several local restaurants had to be evacuated as the water eventually entered their buildings.

Of course there’s no reason to worry, and BCI Central Office is not in any danger of flooding. But it is a reminder that we, the BCI, along with every other organization, need a business continuity plan to deal with such events. What would have happened if flood water had entered the building? What if staff could not get to work because of travel disruptions? What if power had been cut off by the storms? All these things need to be considered in advance if we are to remain a functional organization despite whatever disruption comes our way.

Thankfully we do have a business continuity programme in place, so should the worst happen then we will be prepared for it. For well over a year we have had a team made up of CBCIs and DBCIs working in Central Office, led by one of our Fellows and championed by a member of the Board.

The team have been working hard to ensure that threats and consequences are analysed, priority activities are declared, and processes are in place to make sure those priority activities can continue in the event of a disruption. To date it has worked, but we would never rest on our laurels and become complacent, rather we ensure it is an evolving process that continues to develop based on changes at Central Office, the result of actual disruptions, or the outcome of exercises.

This programme will be developed further as we are now recruiting for a dedicated business continuity professional to take it forward.

Business continuity is clearly important to our members, so it is vital that we practice what we preach and have a business continuity programme to be proud of, and we like to think we have achieved this.

David Thorp
Executive Director of the Business Continuity Institute



Earlier this year, the world recognized World Backup Day (WBD) as a reminder to everyone that data is important and has to be protected. As part of the WBD recognition, Barracuda ran a series of blog posts on the reasons why companies lose data even when they do almost everything right.

As a follow up to our WBD activities, Barracuda conducted a survey of general technologists whose responsibilities include data protection and recovery. To be blunt, some of these results are alarming. In this article, we are going to run through the results, explain what they mean, and take a look at how to resolve these issues of concern.


As you know, ransomware is a global epidemic and is expected to cost over $5 billion in damages in 2017. Ransomware is a dangerous attack because it doesn’t just make a system unavailable; it renders the data unusable. This has already caused a great deal of trouble for healthcare institutions, government entities, law enforcement agencies, and of course, businesses all over the world. If you’ve fallen victim to a ransomware attack, there are only two ways to get your data back without paying the ransom: obtain a free decryptor, where one exists for the ransomware variant involved, or fall back on your data protection strategy and recover your data.

Some victims have no choice other than to pay the ransom or lose their data. This is an unfortunate situation, because even if the ransom is a small amount, there are a number of problems with this course of action:

  • Criminals know you are willing to pay a ransom and are more likely to target you again
  • There is no way to know that the criminals will or can decrypt your data
  • Decryption might not work properly and you may lose data anyway
  • Law enforcement agencies and other authorities discourage rewarding the criminal by paying the ransom

You can leave your data decryption and recovery up to chance, or deploy a comprehensive strategy before the attack.

Data Protection and Recovery

There are a number of definitions for “data protection,” but the common theme is that it requires more than running a backup. Proper data protection is part of security planning: it includes business continuity and disaster recovery planning, as well as the many security practices involved in preventing unauthorized access. The Barracuda survey focused on data recovery, which is ultimately what system administrators are trying to provide for their companies. Comprehensive data recovery requires data availability and data accessibility at all times.

Availability vs Accessibility

Let’s start with a quick overview of what these are. When we talk about the availability of a data backup, we’re talking about the data that is stored as a backup. In the case of a tape-based or disk-based system, the backed-up data is available on the tape or on the disk.

Data accessibility refers to how easily the data can be accessed for recovery. In our examples above, the data is not accessible unless the tape or disk is loaded into a compatible system. Accessibility for that system may be close to 100% for an administrator in a server room, but may drop to zero while the administrator is off-site or away from a designated computer. Meanwhile, the availability of the data remains the same.
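The distinction can be sketched in a few lines of Python. This is purely illustrative - the class and field names are invented for this example, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Backup:
    """Toy model of one backup copy (names are illustrative only)."""
    medium: str           # e.g. "tape", "disk", "cloud"
    intact: bool          # is the stored copy itself readable?
    reachable_from: set   # locations that can reach it for a restore

def available(b: Backup) -> bool:
    # Availability: the backed-up data exists and is intact on its medium.
    return b.intact

def accessible(b: Backup, location: str) -> bool:
    # Accessibility: the data can actually be reached for recovery from here.
    return b.intact and location in b.reachable_from

tape = Backup("tape", intact=True, reachable_from={"server_room"})
cloud = Backup("cloud", intact=True,
               reachable_from={"server_room", "home", "mobile"})

# The tape is available but inaccessible to an off-site administrator;
# the cloud copy is both available and accessible.
print(available(tape), accessible(tape, "home"))    # True False
print(available(cloud), accessible(cloud, "home"))  # True True
```

Availability stays constant in both cases; only accessibility changes with the administrator's location.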

When questioned on the importance of availability and accessibility, 70.3% of respondents say that these two are equally important. This indicates that our respondents understand the value of the data as well as the value of recovering the data quickly, possibly from a remote location or even a mobile device.

Protecting Multiple Locations

Perhaps one of the reasons that so many respondents value accessibility as highly as availability is that 53.4% are responsible for data recovery in more than one location. That means that the majority of the respondents are working remotely at least some of the time. Their data recovery systems have to be accessible from more than one location and probably by more than one method.

50.6% of respondents say that their backups are cloud-based, and 76% of respondents replicate their data backups in the cloud. These numbers suggest that the 77.4% who say they have a disaster recovery plan are using the cloud for redundancy and accessibility. Cloud-based data recovery is generally performed through a web browser with no need for special hardware.

The Bad News

There are two data points that cause some concern among the Barracuda data protection professionals. The first is that 81.2% of respondents do not test their data protection strategies more than once per year, and about half of that number do not test them at all. This could be a major pain point for these respondents. As we mentioned earlier, data recovery may be the only way to avoid paying a ransom that may or may not result in the decryption of data.

Another point to consider is that it’s good business to test the company resources. If the company has invested in the technology and planning to protect the data, then these things should be tested on a regular basis. User files change in value, applications are added or replaced, data is moved … these are all reasons to be testing backups more than once per year. Perhaps an application upgrade uses a new database instead of the old flat files. Perhaps a new application was never added to the data protection plan.
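As a minimal sketch of what routine testing can look like, the snippet below restores a file into a scratch directory and verifies its checksum against the live copy. Here `restore_test` and `restore_func` are hypothetical names - `restore_func` stands in for whatever restore call your actual backup tool provides; the demo simply copies the file:

```python
import hashlib
import pathlib
import shutil
import tempfile

def sha256(path) -> str:
    # Checksum the whole file so a corrupted restore is detected.
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def restore_test(source_file, restore_func) -> bool:
    """Restore one file into a scratch directory and compare checksums.
    restore_func(src, dst) stands in for a real backup tool's restore call."""
    with tempfile.TemporaryDirectory() as tmp:
        restored = pathlib.Path(tmp) / "restored"
        restore_func(source_file, restored)
        return sha256(source_file) == sha256(restored)

# Demo with a trivial "restore" that just copies the live file:
demo = pathlib.Path(tempfile.mkdtemp()) / "payroll.csv"
demo.write_text("id,amount\n1,100\n")
print(restore_test(demo, shutil.copyfile))  # True
```

Scheduling a check like this, and alerting on a False result, turns backup testing from an annual event into a routine control.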

The second point here deals specifically with Office 365.  Nearly 66% of Office 365 administrators are relying on the Recycle Bin for backup. Only about 1/3 of our respondents are using a data protection solution to protect their Office 365 deployments.

The Microsoft Recycle Bin is a nice feature, but its job is to help the organization safeguard against accidental data loss. It’s not meant to be a data recovery solution. It doesn’t offer the features necessary to protect Exchange, SharePoint, OneDrive, and the other services. Default retention times are not standard across services, so administrators may not even have the minimal protection that they expected. Data is non-recoverable once it is deleted or ages out of the Recycle Bin. Companies that have to work within compliance frameworks and liability requirements may find that the native Microsoft tools do not meet the regulatory standards.
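A toy model shows why retention-based deletion is not backup: once an item ages past its retention window, it is unrecoverable. The 30-day window below is illustrative only - real defaults differ across Office 365 services:

```python
from datetime import date, timedelta

def recoverable(deleted_on: date, retention_days: int, today: date) -> bool:
    """Recycle-bin-style retention: an item can be restored only inside
    its retention window; after that it is gone for good."""
    return today <= deleted_on + timedelta(days=retention_days)

# A file deleted on 1 June with a 30-day window:
print(recoverable(date(2017, 6, 1), 30, date(2017, 6, 20)))  # True
print(recoverable(date(2017, 6, 1), 30, date(2017, 7, 15)))  # False - aged out
```

A real backup solution keeps independent point-in-time copies instead, so recovery does not hinge on noticing a deletion before the window closes.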

What Next?

If you find yourself in one of the scenarios that we identified as “bad news,” don’t worry too much. These are things that can be fixed quickly, and then improved upon as you go along. You can start right now by evaluating your current data protection and recovery plan. Do you have one? Who is responsible for the deployment and management of the plan? Is the plan being tested? Are there any gaps between your recovery objectives and the capabilities of your data recovery solutions?

One of the most important questions to consider is whether your data protection and recovery plans are part of your security strategy. If you work in an environment where data protection is separate from security, it’s time to bring those two functions together. In the age of ransomware, they cannot be separated.

Rod Mathews is SVP & GM, Data Protection Business at Barracuda Networks. Connect with him on LinkedIn.

Wednesday, 19 July 2017 16:09

Data Recovery in the Age of Ransomware

The Business Continuity Institute

One in eight global business decision makers believe that poor information security is the ‘single greatest risk’ to the business, according to a study by NTT Security, which also found that 57% believe a data breach to be inevitable at some point.

The 2017 Risk:Value Report highlighted that the impact of a breach will be two-fold, with respondents expecting a breach to affect their long-term ability to do business, together with short-term financial losses. More than half (55%) cite loss of customer confidence, damage to reputation (51%) and financial loss (43%), while 13% admit staff losses and 9% say senior executive resignations would impact them.

56% of business decision makers say their organization has a formal information security policy in place, up from 52% in 2015. Just over a quarter (27%) are in the process of implementing one, while 1% have no policy or plans to implement one. However, while the vast majority (79%) say their security policy has been actively communicated internally, only a minority (39%) say employees are fully aware of it. Germany and Austria (85%) are above average in communicating the policy, together with the US (84%) and the UK (83%).

Less than half (48%) of organizations have an incident response plan, although 31% are implementing one. But just 47% of decision maker respondents are fully aware of what the incident response plan includes.

The study also found that many global business decision makers are still unaware of the implications of the forthcoming General Data Protection Regulation (GDPR), as well as other compliance regulations, with one in five admitting they do not know which regulations their organization is subject to. Just four in ten (40%) respondents globally believe their organization will be subject to the EU GDPR.

Coming into force in May 2018, the legislation leaves companies with less than a year to comply with strict new regulations around data privacy and security and could result in penalties of up to €20 million or 4% of global annual turnover, whichever is higher.
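The upper tier of the penalty is simply the greater of the two figures, which a short sketch makes concrete (function name is illustrative):

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper-tier GDPR administrative fine: the greater of EUR 20 million
    or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

print(max_gdpr_fine(100_000_000))    # 20000000 (4% is EUR 4M, so the floor applies)
print(max_gdpr_fine(1_000_000_000))  # 40000000.0 (4% exceeds the EUR 20M floor)
```

In other words, the EUR 20 million floor bites until global turnover passes EUR 500 million; beyond that, the 4% figure governs.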

With data management and storage a key component of the GDPR, the report also reveals that a third of respondents do not know where their organization’s data is stored, while just 47% say all of their critical data is securely stored. Of those who know where their data is, fewer than half (45%) describe themselves as ‘definitely aware’ of how new regulations will affect their organization’s data storage.

Data breaches are already the second greatest cause of concern for business continuity professionals, according to the Business Continuity Institute's latest Horizon Scan Report, and once this legislation comes into force, bringing with it higher penalties than already exist, this level of concern is only likely to increase. Organizations need to make sure they are aware of the requirements of the GDPR, and ensure that their data protection processes are robust enough to meet these requirements.

“In an uncertain world, there is one thing organizations can be sure of and that’s the need to mark the date of 25 May 2018 in their calendars," according to Garry Sidaway, SVP Security Strategy & Alliances at NTT Security. “While the GDPR is a European data protection initiative, the impact will be felt right across the world for anyone who collects or retains personally identifiable data from any individual in Europe. Our report clearly indicates that a significant number do not yet have it on their radar or are ignoring it. Unfortunately many organizations see compliance as a costly exercise that delivers little or no value, however, without it, they could find themselves losing business as a result, or paying large regulatory fines."

The Business Continuity Institute

Employees at 40% of businesses across the globe hide IT security incidents in order to avoid punishment, according to a study conducted by Kaspersky Lab, and the dishonesty is most challenging for larger businesses: 45% of enterprises (over 1,000 employees) experience employees hiding cyber security incidents, compared with 42% of SMBs (50 to 999 employees) and only 29% of VSBs (fewer than 50 employees).

The report - Human Factor in IT Security: How Employees are Making Businesses Vulnerable from Within - revealed that not only are employees hiding incidents, but also that uninformed or careless employees are one of the most likely causes of a cyber security incident - second only to malware. While malware is becoming more and more sophisticated each day, the surprising reality is that the evergreen human factor can pose an even greater danger. 46% of IT security incidents are caused by employees each year - nearly half of the security issues businesses face are triggered by employee behaviour.

Staff hiding the incidents that they have encountered may lead to dramatic consequences for businesses, increasing the overall damage caused. Even one unreported event could indicate a much larger breach, and security teams need to be able to quickly identify the threats they are up against to choose the right mitigation tactics.

“The problem of hiding incidents should be communicated not only to employees, but also to top management and HR departments,” said Slava Borilin, security education program manager at Kaspersky Lab. “If employees are hiding incidents, there must be a reason why. In some cases, companies introduce strict, but unclear policies and put too much pressure on staff, warning them not to do this or that, or they will be held responsible if something goes wrong. Such policies foster fears, and leave employees with only one option - to avoid punishment whatever it takes. If your cyber security culture is positive, based on an educational approach instead of a restrictive one, from the top down, the results will be obvious.”

Borilin also points to an industrial security model, where reporting and a ‘learn from mistakes’ approach are at the heart of the business. For instance, in a recent statement, Tesla’s Elon Musk requested that every incident affecting worker safety be reported directly to him, so that he can play a central role in change.

The fear businesses have of being put at risk from within is clear in the results of the survey, with the top three cyber security fears all related to human factors and employee behaviour. Businesses worry the most about employees sharing inappropriate data via mobile devices (47%), the physical loss of mobile devices exposing their company to risk (46%) and the use of inappropriate IT resources by employees (44%).

While advanced hackers might always use custom-made malware and high-tech techniques to plan a heist, they will likely start with exploiting the easiest entry point - human nature. According to the research, nearly one in three (28%) targeted attacks on businesses in the last year had phishing/social engineering at their source. Sophisticated targeted attacks do not happen to organizations every day - but conventional malware strikes en masse. Unfortunately though, the research also shows that even where malware is concerned, unaware and careless employees are often involved, causing malware infections in more than half (53%) of incidents that occurred globally.

The human element of cyber security was the key focus of Business Continuity Awareness Week 2017, organized by the Business Continuity Institute, with the report published by the BCI identifying the simple steps that everyone can take in order to play a part in improving cyber security.

“Cyber criminals often use employees as an entry point to get inside the corporate infrastructure. Phishing emails, weak passwords, fake calls from tech support - we’ve seen it all,” said David Jacoby, security researcher at Kaspersky Lab. “Even an ordinary flash card dropped in the office parking lot or near the secretary’s desk could compromise the entire network - all you need is someone inside who doesn’t know about, or pay attention to, security, and that device could easily be connected to the network, where it could wreak havoc.”

The case of Code Spaces still echoes in cyberspace. Code Spaces offered cloud facilities to developers and had a successful business model, until it became the target of a cyberattack.

The attack started as a DDoS (distributed denial of service) attack. Strangely enough, the attacker left messages on the Code Spaces internal console offering a way to stop the attack - showing that the attacker had already penetrated Code Spaces' systems.

When Code Spaces attempted to oust the attacker, the attacker retaliated by deleting large portions of Code Spaces' data, causing irreparable and fatal damage to the company, whose backup strategy failed to save it. So, what went wrong?



4 Techniques for Auditors

Data analytics has been discussed by the audit community for decades. Auditors and other assurance professionals of a certain age might well remember “computer-assisted auditing techniques,” better known as CAATs. Data analytics and CAATs were supposed to revolutionize audit and usher in an era of greater efficiency and audit coverage. Yet, despite the hype, this revolution never seemed to materialize.

Now the hype has shifted to advanced analytics techniques, such as predictive analytics, and related areas such as machine learning and robotics. While these tools and techniques will surely become very important, today’s typical audit department needs to first focus on getting the basics right.

This is the year data analytics in audit is truly taking off. CEB, now Gartner, recently conducted a survey of more than 270 global audit departments. We surveyed analytics use in 34 risk areas, ranging from fraud to M&A, and found that while the average organization has been using analytics for a year or longer in just six of them, they plan to apply analytics in the remaining 28 over the next year. Furthermore, 2017 marks the first time a typical audit department will use analytics in most phases of the engagement process, as well as in audit planning and the annual risk assessment.



Tuesday, 18 July 2017 15:51

Data Analytics Becomes Reality

The watchword for business continuity (BC) now and in the coming years will be complexity.

Evolutions in technology, organizational structure, banking, leadership, the global economy, and practically every existing discipline have begun to outstrip traditional methods that hoped to address and contain such complexity. As our everyday work moves from simple and complicated contexts (as envisioned by Ralph D. Stacey and explicated by Snowden and Boone) into complex contexts, we must create new approaches to function within the complexity. The Agile framework for project management is one such example of a new approach that embraces and thrives within complex contexts.

BC has begun to struggle with the reality of increasing complexities. Detailed recovery scripts, time-consuming BIA data collection, binders of documentation, and a linear lifecycle relatively unchanged since Y2K seem inefficient and outdated in this “Agile Age” of rapid acquisitions, social media, blockchain, holacracies, and the internet of things. The stark unpredictability of disasters combined with the nearly unimaginable constitution of the near future should give pause to anyone who believes BC can be done properly by just anyone armed with an internet template.

There is a way for BC to evolve to meet these challenges. First, it must establish a robust, theoretical foundation for the discipline, moving beyond an ad hoc collection of “professional practices.” Second, it must identify and implement alternative approaches that are nonlinear, iterative, and adaptive. Third, practitioners must find new and better ways to share proven practices with each other, and to offer real critique of both new and old practices. Fourth, the best BC professionals will no longer frame their work in terms of plans, but now in terms of portfolios, an evolving collection of recovery capabilities that can be brought to bear in times of adversity and disaster.

In this lecture, I provide an approach to establish a Business Continuity Portfolio Management Office (BC PMO). While this very brief presentation covers a lot of material (perhaps too much), it contains almost all the necessary theoretical and practical elements to provide a proper foundation for those who will create the very first BC PMOs in the industry.

– David Lindstedt, PhD, PMP, CBCP

David Lindstedt is the founder of Readiness Analytics, an organization focused on metrics, measures, and KPIs for recovery capabilities. Dr. Lindstedt is the co-author (along with Mark Armour) of the Adaptive BC Manifesto and "Adaptive Business Continuity: A New Approach." He is also the creator of several supporting web sites including AdaptiveBCP.org, ReadinessTest.com, and Jeomby.com. Dr. Lindstedt has published in international journals and presented at numerous international conferences. He taught for Norwich University's Master of Science in Business Continuity Management.

The Business Continuity Institute

In the context of the manufacturing industry, business continuity is about ensuring products continue to reach and be delivered to customers, regardless of any internal problems or issues that arise.

Like all businesses, manufacturers need to identify their critical, value-adding business activities and processes, and focus on keeping them operational - or restoring them to full operational capacity within a set time frame - regardless of the issue. This in turn maintains product delivery to end consumers.

The basic principle of a manufacturer is to convert inputs (raw materials, ingredients, chemicals) into an output/product for sale. This is achieved by inputs undergoing transformational processes along the production line which add value at each stage. Labour, machinery and other tools combine to produce this production capability and thus, by the end of the whole production line, there is a product ready for sale.

What does a manufacturer need to consider to ensure business continuity?

To run a manufacturing production line effectively, you need to avoid disruptions in three key areas:

  • Staffing
  • Materials/Inputs
  • Machinery


In manufacturing, staff are needed to maintain and control the production line, ensure it stays operational and to spot early warning signs of any problems. Staff are integral in keeping the production line functional.

Ensuring staff have the proper training is vital to operational success. Lack of training will lead to mistakes and disruptions anywhere along the production line. Investing time and money in preparing a training package for new and current staff will help minimize mistakes and disruption.

Cross-training should also be considered. Training staff across the full range of business activities will ensure business activity continues if a vital member of staff were ever to leave, fall sick, or take holiday during busy periods.

Efficient staff recruitment processes may also be of value. Losing a number of employees simultaneously will cause disruptions and increased pressure on remaining staff (again, highlighting the importance of cross-training). Having other options such as agency workers or temporary staff is much quicker and easier to implement in the short term, allowing business to continue until more permanent positions are filled.


Inputs and raw materials are particularly important for manufacturers because without inputs, there can be no final output which in turn means no sales.

If a manufacturer limits themselves to one supplier of a material, and that supplier is unable to supply the material needed, then the manufacturer is also unable to produce their products. Therefore, manufacturers should have a diverse supply chain. Sourcing multiple suppliers of raw materials will minimize the risk and impact on the manufacturing process. If the primary supplier is unable to supply, the manufacturer has secondary options and can ensure business continues.

No business wants faulty goods, as these may mean product recalls and a tarnished brand image. Faulty goods can be a direct result of poor quality materials or inputs. Therefore, manufacturers should implement a quality inspection procedure upon receiving materials. This will help ensure the inputs meet the standard the manufacturer requires, reducing disruptions further along the production process.

Other non-tangible aspects also must be considered. For example, electricity supply is paramount to a manufacturer as it powers the machinery and other processes. Without it, the whole business grinds to a halt. Having a back-up generator installed will ensure business and manufacturing activities continue despite power shortages or prolonged power cuts.


It is essential that factory equipment and tools are fully functioning to carry out the manufacturing process. As a result, maintaining equipment and checking that it is safe to use is critical.

You need to spend enough to ensure your machinery and equipment meet regulatory standards; preventative maintenance is a must for all manufacturing businesses. Preventive maintenance works on the same principle as servicing your car, except that servicing factory machinery tends to be a lot more costly! Waiting until the machine breaks means you’ve waited too long!

The harsh reality is that customers have little interest in understanding manufacturing problems. They react the same way you react to your own suppliers: when a delivery is late, all you care about is that it’s late. Customers are no different - they need their products, and if they can’t get them from their chosen source they might just go elsewhere!

Michael Conway has been a director of Renaissance Contingency Services since founding it in 1987. He established Renaissance as Ireland’s premier IT security distributor and leading independent business continuity consultancy provider.

The Business Continuity Institute

Despite the increasing number of data breaches and nearly 1.4 billion data records being lost or stolen in 2016, the vast majority of IT professionals still believe perimeter security is effective at keeping unauthorised users out of their networks, according to a study by Gemalto.

The Data Security Confidence Index showed that businesses feel that perimeter security is keeping them safe, with most (94%) believing that it is quite effective at keeping unauthorised users out of their network. However, 65% are not extremely confident their data would be protected, should their perimeter be breached, a slight decrease on last year (69%). Despite this, nearly 6 in 10 (59%) organizations report that they believe all their sensitive data is secure.

According to the research findings, 76% said their organization had increased investment in perimeter security technologies such as firewalls, IDPS, antivirus, content filtering and anomaly detection to protect against external attackers. Despite this investment, two thirds (68%) believe that unauthorised users could access their network, rendering their perimeter security ineffective.

These findings suggest a lack of confidence in the solutions used, especially when over a quarter (28%) of organizations have suffered perimeter security breaches in the past 12 months. The reality of the situation worsens when considering that, on average, only 8% of data breached was encrypted.

Businesses' confidence is further undermined by over half of respondents (55%) not knowing where their sensitive data is stored. In addition, over a third of businesses do not encrypt valuable information such as payment (32%) or customer (35%) data. This means that, should the data be stolen, a hacker would have full access to this information and could use it for crimes including identity theft, financial fraud or ransomware.

"It is clear that there is a divide between organizations' perceptions of the effectiveness of perimeter security and the reality," said Jason Hart, Vice President and Chief Technology Officer for Data Protection at Gemalto. "By believing that their data is already secure, businesses are failing to prioritize the measures necessary to protect their data. Businesses need to be aware that hackers are after a company's most valuable asset – data. It's important to focus on protecting this resource, otherwise reality will inevitably bite those that fail to do so."

With the General Data Protection Regulation (GDPR) becoming enforceable in May 2018, organizations must understand how to comply by properly securing personal data to avoid the risk of administrative fines and reputational damage. However, over half of respondents (53%) say they do not believe they will be fully compliant with GDPR by May next year. With less than a year to go, businesses must begin introducing the correct security protocols in their journey to reaching GDPR compliance, including encryption.

Hart continues, "Investing in cyber security has clearly become more of a focus for businesses in the last 12 months. However, what is of concern is that so few are adequately securing the most vulnerable and crucial data they hold, or even understand where it is stored. This is standing in the way of GDPR compliance, and before long the businesses that don't improve their cyber security will face severe legal, financial and reputational consequences."

The scale of the cyber threat is well known to business continuity and resilience professionals, who identified cyber attacks and data breaches as their top two concerns in the Business Continuity Institute's latest Horizon Scan Report. It cannot be emphasised enough just how important it is for organizations to have plans in place to respond to such incidents and help lessen their impact.

The Business Continuity Institute

3 in 10 (29%) travel managers report they do not know how long it would take to locate affected employees in a crisis, according to a new study by the GBTA Foundation, the research and education arm of the Global Business Travel Association, in partnership with Concur.

The study revealed that, overall, one-half (50%) of travel managers say, in the event of an emergency, they can locate all of their employees in the affected area within two hours or less. Additionally, three in five (60%) travel managers rely on travelers to reach out if they need help and have not booked through proper channels.

“Research reveals significant gaps in educating travelers about resources available to them and the existence of protocols should the unforeseen happen,” said Kate Vasiloff, GBTA Foundation Director of Research. “Failing to establish and communicate safety measures leaves travelers and organizations vulnerable. As both security threats and technology evolve, even the most robust protocols that once served companies well may now have weaknesses requiring immediate attention and modification.”

“With business travel and global uncertainties on the rise, companies today face more pressure than ever to ensure the safety of their travelers,” said Mike Eberhard, President of Concur. “If a crisis or incident occurs, it’s critical that businesses be prepared to quickly locate employees and determine who may need assistance.”

Travel managers play a key role in supporting travelers should disaster strike, which is why the vast majority (85%) of travel programmes include risk management protocols. Over the past two years, the prevalence of domestic travel risk management protocols has increased to rival that of international travel protocols. Despite this progress, there continues to be room for improvement, as only three in five (62%) international travelers are given pre-travel information and even fewer (53%) are given information on local providers of medical and security assistance services before leaving the country.

Once it has been determined that travelers are in an area experiencing a security threat, every minute spent trying to get in touch could be putting them at greater risk. Live personal calls (58%) and automated emails to business addresses (52%) are the most popular methods of communicating with travelers in an emergency.

Being able to communicate with employees during an emergency is a fundamental responsibility of the organization, either to ensure they are safe, or to pass on important advice. The Business Continuity Institute's latest Emergency Communications Report did deliver the encouraging news that most organizations (84%) do have some form of plan in place, although it did highlight that for those which don’t, two thirds (64%) felt that only a business-affecting event would incentivise them to develop one.

Creating Situational Awareness

Several prominent Wall Street firms are transitioning to a cognitive risk management environment. The changes they’ve made are significant, but there’s still work to be done. James Bone asserts that a more comprehensive approach is needed: one that includes intentional control design and machine learning – technology to help humans become more productive.

In my previous articles, I introduced human-centered risk management and the role cognitive risk governance should play in designing the risk and control environment outcomes you want to achieve.  One of the key outcomes was briefly described as situational awareness, which includes the tools and ability to recognize and address risks in real time.  In this article, I will delve deeper into how to redesign the organization using cognitive tools while re-imagining how risks will be managed in the future.  Before I explore “the how,” let’s take a look at what is happening right now.

This concept is not some futuristic state!  On the contrary, this is happening in real-time.  BNY Mellon, one of the oldest firms on Wall Street, has started a transformation to a cognitive risk governance environment.  Mellon is not the only Wall Street titan leading this charge.  JPMorgan, BlackRock and Goldman Sachs are hiring Silicon Valley talent among others to transform banking, in part, to remain competitive and to strategically reduce costs, innovate and build scale not possible with human resources.  The banks have taken a very targeted approach to solve specific areas of opportunity within the firm and are seeking new ways to introduce innovation to customer service and new product development and to create efficiencies that will have profound implications for risk, audit, compliance and IT now and in the foreseeable future.



IBM’s latest z Series mainframe, unveiled today, has a novel security feature the company says users have long wanted but couldn’t get: the ability to easily encrypt all their data, at rest or in motion, with just one click.

The 14th-generation mainframe, called IBM Z, introduces a new encryption engine that for the first time will allow organizations to encrypt all data in their databases, applications, or cloud services, with no performance hit, said Mike Perera, VP of IBM’s z Systems Software unit, in an interview with Data Center Knowledge.

“It’s a security breakthrough that now makes it possible to protect all the data, all the time,” he said. “And we’re really doing it for the first time at scale, which has not been done up to this point, because it’s been incredibly challenging and expensive to do.”



FDA late last year published new guidance on postmarket management of cybersecurity in medical devices. It seems prudent to recognize this guidance for exactly what it is: a wake-up call for the medical industry that we are in the 21st century, and that the potential for hacking any medical device, whether it is connected to a network or not, is a problem that must be taken seriously. In the guidance, FDA provides the means of demonstrating a risk-based approach to managing cybersecurity in medical devices. The agency also provides mitigation and reporting requirements that are governed by other sections of the Code of Federal Regulations (CFR) pertaining to medical devices. So, while some may argue that this guidance has no teeth and cannot be enforced, if a patient is harmed or put at risk by a cybersecurity vulnerability, what company's attorneys are going to argue that their client chose to ignore potential cybersecurity impacts on its medical device because the guidance “didn't have any teeth”?



Federal Emergency Management Agency (FEMA) officials today announced funding awards for the Fiscal Year (FY) 2016 Program to Prepare Communities for Complex Coordinated Terrorist Attacks (CCTA Program). The CCTA Program will provide $35.94 million to selected recipients to improve their ability to prepare for, prevent, and respond to complex coordinated terrorist attacks in collaboration with the whole community.

Terrorist incidents, such as those in London, England; Boston, Massachusetts; Nairobi, Kenya; San Bernardino, California; Paris, France; and Brussels, Belgium, highlight an emerging threat known as complex coordinated terrorist attacks. The FY 2016 CCTA Program is intended to enhance resilience and build capacity for jurisdictions to address complex coordinated terrorist attacks that may occur across the nation.

The selected recipients will receive funding specifically to develop and implement effective, sustainable, and regional approaches for enhancing preparedness for complex coordinated terrorist attacks, which include the following components: identifying capability gaps, developing and/or updating plans, training to implement plans and procedures, and conducting exercises to validate capabilities.

Applications were reviewed and scored independently by a peer review panel composed of subject matter experts representing federal, state, local, territorial and tribal organizations that have experience and/or advanced training in complex coordinated terrorist attacks. Awards were made on a competitive basis to applicants who presented an ability to successfully meet the requirements described in the NOFO, taking into account how well the applicant demonstrated:

    • A need for funding support;
    • Effective, sustainable and regional approaches;
    • The proposed project’s impact that presents an increase in the jurisdiction’s preparedness and resilience to complex coordinated terrorist attacks once the project is implemented; and
    • A reasonable and cost-effective budget.


FY 2016 CCTA Program funding is awarded to the following recipients:

  • Arlington County Government (Va.): $1,244,890
  • City of Aurora (Ill.): $1,373,809
  • City of Chicago Office of Emergency Management and Communications (Ill.): $699,502
  • City of Dallas (Texas): $925,000
  • City of Houston (Texas): $1,759,733
  • City of Los Angeles Mayor's Office of Public Safety (Calif.): $1,223,225
  • City of Miami (Fla.): $723,260
  • City of Phoenix (Ariz.): $1,565,000
  • City of Winston-Salem (N.C.): $1,868,050
  • Durham County (N.C.): $931,500
  • East-West Gateway Council of Governments (Ill./Mo.): $1,474,716
  • Franklin County (Ohio) : $829,725
  • Galveston County (Texas): $976,896
  • Hawaii Department of Defense (Hawaii): $492,800
  • Illinois Emergency Management Agency (Ill.): $1,214,024
  • Indiana Department of Homeland Security (Ind.): $2,024,833
  • King County (Wash.): $1,516,723
  • Knox County (Tenn.): $536,250
  • Maryland Emergency Management Agency (Md.): $2,098,575
  • Metropolitan Washington Airports Authority (D.C./Va.): $595,098
  • Mid-America Regional Council (Mo.): $2,251,502
  • New York State Division of Homeland Security and Emergency Services (N.Y.): $1,379,048
  • San Bernardino County (Calif.): $1,334,751
  • South Carolina Law Enforcement Division (S.C.): $1,530,020
  • South East Texas Regional Planning Commission (Texas): $1,076,336
  • Texas Department of Public Safety (Texas): $659,556
  • Unified Fire Authority of Greater Salt Lake (Utah): $1,043,800
  • Virginia Department of Emergency Management (Va.): $2,001,568
  • Wisconsin Emergency Management (Wis.): $589,810

Follow FEMA online at www.fema.gov/blog, http://www.twitter.com/fema, http://www.facebook.com/fema, and http://www.youtube.com/fema.

The Business Continuity Institute

A major global cyber attack has the potential to trigger $53 billion of economic losses, roughly the equivalent to a catastrophic natural disaster like 2012’s Superstorm Sandy, according to a scenario described in new research by Lloyd’s and Cyence.

Counting the cost: Cyber exposure decoded reveals the potential economic impact of two scenarios: a malicious hack that takes down a cloud service provider with estimated losses of $53 billion, and attacks on computer operating systems run by a large number of businesses around the world which could cause losses of $28.7 billion. By comparison, Superstorm Sandy, the second costliest tropical cyclone on record, is generally considered to have caused economic losses between $50 billion and $70 billion.

The study also revealed that, while demand for cyber insurance is increasing, the majority of these losses are not currently insured, leaving an insurance gap of tens of billions of dollars.

Inga Beale, CEO of Lloyd’s, said: “This report gives a real sense of the scale of damage a cyber-attack could cause the global economy. Just like some of the worst natural catastrophes, cyber events can cause a severe impact on businesses and economies, trigger multiple claims and dramatically increase insurers’ claims costs. Underwriters need to consider cyber cover in this way and ensure that premium calculations keep pace with the cyber threat reality.”

For the cloud service disruption scenario, average economic losses range from US$4.6 billion for a large event to US$53 billion for an extreme event. These are scenario averages; because of the uncertainty around aggregating cyber losses, the figure could be as high as US$121 billion or as low as US$15 billion. Meanwhile, average insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss.

In the mass software vulnerability scenario, the average losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event. And the average insured losses range from US$762 million to US$2.1 billion.

The uninsured gap could be as much as $45 billion for the cloud services scenario – meaning that less than a fifth (17%) of the economic losses are actually covered by insurance. The insurance gap could be as high as $26 billion for the mass vulnerability scenario – meaning that just 7% of economic losses are covered.

The Business Continuity Institute

These days, most organizations that 'do' business continuity understand the importance of exercising and testing. Many have comprehensive exercising and testing programmes, which include crisis/incident management exercises, IT recovery tests and user relocation tests, amongst others.

It's not unusual for IT recovery testing to be done out of hours, in order to minimise any risk or impact to the business. The same is sometimes true of user relocation testing. But crisis or incident management exercises are almost always conducted during office hours.

The main reason is that exercising during office hours is more convenient, both for the participants and the facilitators, and there's usually (although not always) more chance of getting the key players to attend.

But exercising during the working day also has some distinct disadvantages. It doesn't, for instance, simulate in any meaningful way a situation where those key players have to deal with a major issue when they're already tired after a busy day's work. It doesn't test out of hours access to facilities or people. And out of hours is precisely when small incidents have a nasty habit of turning into bigger incidents, usually exacerbated by the fact that the right people aren't around to nip them in the bud.

Organizations with a mature crisis/incident management exercising programme should give serious consideration to carrying out the occasional out of hours exercise. This may be a little unpopular at first, until participants get the point, so rather than going the whole hog and starting your next exercise at 2am on a Sunday, perhaps a 7pm start on a weekday would be slightly more palatable.

There may be some moans and groans at first, but these are likely to be far outweighed by the resulting improvements to your crisis/incident management capability.

Andy Osborne is the Consultancy Director at Acumen, and author of Practical Business Continuity Management. You can follow him on Twitter and his blog or link up with him on LinkedIn.

Monday, 17 July 2017 14:01

BCI: All in good time

No one ever calls for outages, and yet they happen all the time.

They’re about as predictable as the weather. There are no patterns or seasons for server crashes and data breaches. And when they strike can be just as surprising as how they strike.

Squirrels could mistake wires for nesting material. Hackers could infiltrate data when your guard is down. Even unexpected traffic surges can take down your servers if you’re not prepared.

With all the unknowns out there, there are some things you can control. Make sure your disaster recovery plan works for you. Test it thoroughly and regularly, and update it as your IT systems and business evolve. And don’t forget to keep your employees up to speed on your plans and processes to minimize human error and confusion during an emergency.

The weatherman in our cartoon is right (for once). Be ready for outages at any time, and you won’t be blindsided by an unexpected event.

Feel free to share this cartoon, with a link back to this post and this attribution: “Cartoon licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License. Based on a work at blog.sungardas.com.”

Imagine closed schools, overwhelmed hospitals and people dying by the thousands — or even millions. That’s the nightmare scenario for a flu pandemic.

But how likely is a pandemic to happen — and if it does, to develop into this worst-case scenario?

Pandemics are “like earthquakes: You know it’s coming, but you’re not quite sure exactly when,” said Joshy Jacob, associate professor of microbiology and immunology at Emory University. “The seasonal flu appears predictably annually. Pandemics happen unpredictably and often catch you by surprise.”

There are reasons both for alarm and for optimism, experts say. Medical research could lead to breakthroughs that would mitigate a flu pandemic. And government and private entities can make preparations to help them get through a bad pandemic if it occurs. But there is much work to be done.



Did you know that car manufacturers tend to choose the letters for their car model references according to the type of buyer they want to attract?

For example, letters around the middle of the alphabet are used when aiming at the family market (like 340L or 570M).

On the other hand, letters towards the end of the alphabet are considered to spark more interest in buyers looking for performance, power, acceleration, and so on (690X, 88Z, etc.). So, what do you think “management” suggests in a business continuity context, and is this word really any basis for long term BC success?



The many automated “out of office” messages that return when I send emails each day are a sure sign that summer vacations are in full swing. Whether people are enjoying life unplugged or preparing for a seasonal destination, one topic seems to dominate thoughts and conversations: the leisure getaway.

The conversation that occurs between friends and colleagues is just a pale reflection of the summer-vacation-themed chatter unfolding online, as travelers turn to social media to plan their itineraries. Forrester Data’s Consumer Technographics® insights reveal that nearly a fifth of US leisure travelers referenced travel-focused blogs, communities, or review sites to research their recent flight tickets, rental cars, and hotel accommodations. In fact, data shows that parents of young children who are in the throes of last-minute vacation prep are especially reliant on social channels: 26% reference peer-generated content before booking accommodation, and over a fifth do so when reserving an airline ticket or renting a car.

So, what are the social-savvy summer vacationers talking about this year? Taking the pulse of social listening data reveals the most passionate elements of current conversations:



This is part 2 of a 4-part series on organizational transformation.

When embarking on any type of journey, preparation and readiness are prerequisites for success. Whether it is planning for a trip, training for an athletic event, or transforming a business, one must first assess whether all the necessary pieces are in place to execute the plan and achieve the desired objective. Without this type of assessment, a critical part of the equation might be overlooked, and the intended results and benefits may not be fully realized.

Consider the marathon runner. This type of athlete does not begin an effort of this scale without the proper focus. Is he or she following an appropriate training plan for this type of event? Does his or her diet align with what is needed for the required level of energy and stamina? And perhaps most importantly, is the athlete’s overall health sufficient to undertake this type of endeavor? If the runner is not focusing on these aspects, the outcome may be disappointing.

Just like the marathon runner, organizations that wish to embark on a journey as significant as transforming their business models or underlying strategies must ensure they are focusing on all the necessary aspects to be successful. A digital transformation is a perfect example of a highly disruptive change effort that requires organizations to have a sharp focus and consider all variables at play, including people, processes, and technology. While implementing a digital transformation is a different type of marathon, it nevertheless requires a proper ‘health check’ to ensure overall readiness.



Alongside the National Flood Insurance Program (NFIP), a thriving private flood insurance market would provide wider and in many cases cheaper coverage options, according to a new study.

Consulting firm Milliman, in partnership with risk modeler KatRisk, looked at three states – Florida, Texas, and Louisiana – which combined account for 56 percent of NFIP insurance policies in-force nationwide.

Its analysis compared modeled private flood insurance premiums to those of the NFIP.



Early detection of fire and smoke is essential to saving lives, property and the environment. Modern technology such as video fire detectors, especially in high-risk places like tunnels, oil and gas environments, public buildings and storage areas, enables a fast response to a potential fire. A new ISO technical specification on video fire detectors helps ensure more efficient and reliable equipment.

According to the Center of Fire Statistics (CFS) of the International Association of Fire and Rescue Services (CTIF), fire services in 31 countries representing 14% of the world’s population reported 3.5 million fires, 18,500 civilian fire deaths and 45,000 civilian fire injuries in 2015.

Video detection technology detects, identifies and analyses smoke at the first sign of fire or flame. The equipment’s understanding of the behaviour and movement of smoke enables users, located on site or remotely, to raise the alert and take appropriate action early.



The Business Continuity Institute

One in three (32%) security professionals lack effective intelligence to detect and act on cyber threats, according to a new study from Anomali, which also revealed that almost a quarter (24%) believe they are at least one year behind the average threat actor. Half of this group admitted they are trailing by two to five years. This suggests that many organizations are not adequately mitigating cyber risks.

The survey also signals that organizations struggle to detect malicious activity at the earliest stage of a breach, or learn from past exposures, which leaves numerous vulnerabilities undiscovered. Almost one in five (17%) of respondents haven’t invested in any threat detection tools such as security information and event management (SIEM), paid or open threat feeds, or User and Entity Behaviour Analytics (UEBA).

The findings of this study also demonstrate the need for organizations to possess an effective business continuity programme. If security professionals aren't able to detect or prevent cyber threats, then organizations must have plans in place to deal with those that do get through, to ensure they are not disruptive to operations.

Successful cyber attacks are not 'smash and grab' events. Rather, cyber criminals typically lurk undetected inside an enterprise’s IT systems for 200 days or more before discovery. During this time attackers gain access inside the network, escalate privileges, search for high-value information, and ultimately exfiltrate data or perform other malicious activities. This ‘200 day problem’ is an ever-present danger, yet survey respondents rarely examine historical records to discover whether a threat actor has entered their system. Just 20% consult past logs daily, 20% weekly and 14% monthly, while 22% said they never do, or don’t know how often they do. This results in multiple missed opportunities to prevent a breach.

“The ‘200 day problem’ arises from the fact that logs are produced in such massive quantities that typically only 30 days are retained and running searches over long time ranges can take hours or even days to complete,” says Jamie Stone, Vice President, EMEA at Anomali. “Detecting a compromise at the earliest stage possible can identify suspicious or malicious traffic before it penetrates the network or causes harm. It’s imperative to invest in technologies security teams can use to centralise and automate threat detection, not just daily but against historical data as well.”
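The retro-hunting Stone describes (re-checking retained logs for signs of an earlier compromise) can be sketched in a few lines. This is an illustration only, not any vendor's product; the comma-separated log schema, the indicator list, and the `retro_hunt` function are all hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical indicators of compromise (IOCs), e.g. from a threat feed.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def retro_hunt(log_lines, lookback_days=200, now=None):
    """Scan retained logs for traffic to known-bad hosts.

    Each line is assumed to be 'ISO-timestamp,src,dst' - a stand-in
    for whatever schema a real SIEM would export.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=lookback_days)
    hits = []
    for line in log_lines:
        ts, src, dst = line.strip().split(",")
        if datetime.fromisoformat(ts) >= cutoff and dst in KNOWN_BAD_IPS:
            hits.append((ts, src, dst))
    return hits
```

The hard part in practice is not the matching logic but exactly what the quote describes: keeping enough history, and making searches over long time ranges fast enough to run routinely rather than only after a breach is suspected.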

The number of data centers has been increasing continuously since 2009, yet this is about to change. Experts predict that after peaking at roughly 8.6 million data centers in 2017, the number of data centers will begin to decline. The driving factor of this decline is the migration from smaller, in-house IT centers to data centers operated by larger service providers. Although the number of data centers will decrease, data center space will not, because data center capacity will continue to grow.

Due to the shift to larger service providers, the role of the corporate data center will change as well. In the past, corporate data centers solely supported operations. Today, they have a variety of functions, including testing new business models, developing and improving new products, and maintaining lasting relationships with customers. Because a data center must now be able to support a variety of functions, its infrastructure must have the ability to continuously change. This is much harder to accomplish for smaller data centers, which is why you will see these smaller data centers disappear in the coming years. Today’s data centers need to be flexible, dependable and easily scalable.

The shift to larger data centers provides organizations with the infrastructure needed to adapt to a variety of different needs. It is predicted that over the course of the next five years, most organizations will stop managing their own data centers. This will result in the steady decline of in-house data centers, and lead to a higher demand for larger service providers. According to International Data Corporation (IDC), by 2018, ‘mega data centers’ will account for 72.6 percent of all service provider data center construction projects.



Given the recent rash of ransomware attacks, businesses are finding that now is as good a time as any to reevaluate their data backup strategies.

Nearly a third (32 percent) of organizations have been hit by ransomware, found a study from Imperva. Costs accrued by downtime were described as the biggest business impact of a ransomware infection for a majority (59 percent) of respondents.

This week, Hewlett Packard Enterprise (HPE) Software is giving security-conscious organizations new reasons to consider its Adaptive Backup and Recovery Suite by adding additional protections that keep backup data safe. It's a collection of data protection products that includes HPE Data Protector, Backup Navigator, Storage Optimizer and VM Explorer.



Far too often, organizations consider the public a liability — something to be rescued in an emergency situation. The opposite is true. The public is one of our greatest resources in times of crisis and should be included as an important part of your resilience planning and training.

The reality of emergency management is this: The bigger the disaster, the less likely the government can provide the best response.

For smaller disasters, there are multiple organizations that can respond, from the Red Cross to the Salvation Army to our own National Guard. For larger disasters, there is so much demand for assistance that we invariably fall short. We cannot get to people fast enough. In those situations, the tendency is to tell the public to be passive and wait. That is not the best solution and increases the number of lives lost.

In the case of almost any disaster, the fastest response will be from your neighbor. There are countless examples of this:



Thursday, 13 July 2017 15:19

The Public as a Resource

With apologies for paraphrasing Mr. Twain, pundits have sounded VMware’s death knell for years. Whether it be the continuous pressure of public cloud offerings, potentially losing the management tools game, or tech professionals evolving past their current offerings, the company faces some very real, critical threats.

Even so, VMware continues to succeed. In its most recent quarterly results, VMware announced year-over-year revenue growth of 9% to $1.74 billion and GAAP net income of $232 million.

How do they continue to be successful?



The Business Continuity Institute

A large proportion of businesses fail to adequately protect their networks from the potential threat posed by ex-employees, with IT decision makers surveyed as part of a study by OneLogin claiming that over half (58%) of former employees can still access the corporate network. The study also found that nearly a quarter (24%) of UK businesses have experienced data breaches by ex-employees.

Nearly all (92%) of respondents admitted to spending up to an hour manually deprovisioning former employees from every corporate application. Half (50%) of respondents are not using automated deprovisioning technology to ensure an employee’s access to corporate applications stops the moment they leave the business. This deprovisioning burden may explain why over a quarter (28%) of ex-employees’ corporate accounts remain active for a month or more.

Also, the study revealed 45% of businesses don’t use a Security Information and Event Manager (SIEM) to audit for application usage by former employees, leaving vital corporate data exposed to potential leaks.

“The sheer level of data breaches revealed by our study, coupled with the revelation that many businesses are failing to put simple processes in place to promptly deprovision ex-employees, should raise serious alarm bells for business leaders,” said Alvaro Hoyos, Chief Information Security Officer at OneLogin. “Our study suggests that many businesses are burying their heads in the sand when it comes to this basic, but significant, threat to valuable data, revenue and brand image. There should be no excuse for this negligence, which will be brought further into the spotlight when the European Union’s General Data Protection Regulation (GDPR) comes into effect in 2018. GDPR makes data protection a legal requirement for organisations, which could face fines of up to €20 million or 4% of their annual turnover, depending on which is higher.”

“With this in mind, businesses should proactively seek to close any open doors that could provide rogue ex-employees with opportunities to access and exploit corporate data. Tools such as automated de-provisioning and SIEM will help close those doors with ease and speed, while also enabling businesses to manage and monitor all use of corporate applications. The first step is acknowledging the problem, which businesses now have done by confessing they are aware of the issue, they now need to take steps to fix this issue by utilising the available tools,” concludes Hoyos.
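The automated deprovisioning Hoyos recommends can be illustrated with a minimal sketch. The in-memory `Directory` below is a stand-in assumption; a real deployment would call an identity provider's API (for example, one implementing the SCIM standard) rather than a local object:

```python
class Directory:
    """Toy stand-in for an identity provider's entitlement store."""

    def __init__(self):
        self.access = {}  # user -> set of application names

    def grant(self, user, app):
        self.access.setdefault(user, set()).add(app)

    def deprovision(self, user):
        # Revoke every application entitlement the moment a user leaves,
        # returning what was revoked so it can be logged for audit.
        revoked = self.access.pop(user, set())
        return sorted(revoked)

def offboard(directory, leavers):
    """Run against each leaver in the HR feed.

    Returns an audit trail that a SIEM could later check against
    application logs to confirm no post-departure access occurred.
    """
    return {user: directory.deprovision(user) for user in leavers}
```

The point is that offboarding becomes a single automated step driven by the HR system, rather than an hour of manual work per leaver across every application.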

The Business Continuity Institute

“Trust takes years to build, seconds to break, and forever to repair,” or so the quote says. While there may be a degree of flexibility with those timings, the principle that it takes much longer to build a reputation than to break it is absolute. Reputation means a lot to an organization and constitutes a significant proportion of its value.

I have been reading a lot of articles recently about reputation and the number of organizations that have had their reputation damaged, sometimes through no fault of their own.

We published an article recently about false claims against travel operators and the effect these claims, however inaccurate, can have on the reputation of a business. Why would you go on holiday with a travel operator that has a high rate of sickness among its guests?

There was a story this morning published by the BBC that discussed how it will take a generation for Chelsea and Kensington Council to be trusted again following the Grenfell Tower fire. When people feel so let down by an organization, especially in a situation when lives have been lost, it is not easy to forget that and move on.

And we are inundated with stories of organizations that have experienced a data breach, with consumers beginning to question why those organizations cannot protect their data.

Damage to reputation can be devastating for an organization and perhaps the most famous story of all when it comes to reputation and the sudden loss of it, is that of Ratners, the high street jewellers. In his speech to the Institute of Directors, the chief executive of the company – Gerald Ratner – included the line:

"We also do cut-glass sherry decanters complete with six glasses on a silver-plated tray that your butler can serve you drinks on, all for £4.95. People say, 'How can you sell this for such a low price?' I say, 'Because it's total crap.'"

The next day the share price plummeted and the company was on the brink of collapse.

It is this potentially disastrous impact that damage to your reputation can have that makes it a business continuity issue. Of course, that’s not to say that reputation management is the responsibility of the business continuity department, because clearly it’s not. But it is something that the business continuity department can play a role in.

Arguably, loss of trust should be considered in the same light as loss of IT, loss of power, loss of buildings, etc. The organization needs to consider what the potential impact could be, how that impact could be mitigated, and what mechanisms could be put in place to ensure the organization continues to operate effectively and to prevent the loss from being too disruptive.

This is perhaps the perfect example of what we at the BCI have been speaking a lot about recently - management disciplines cannot work in silos any longer. On matters of reputation business continuity professionals should be engaging with communications professionals to ensure that crisis communications plans are in place and that the organization is prepared.

Is that easier said than done? Are we making progress in this respect? Your thoughts, as always, are welcome.

David Thorp
Executive Director of the Business Continuity Institute

Wednesday, 12 July 2017 15:56

BCI: Protecting your reputation

An ocean wave pulls away from the shore and then, as expected, it moves toward land again. But it keeps moving farther and farther inland. The water pushes over unsuspecting beachgoers, backyards and entire cities with startling speed. It leaves a wake of destruction in Indonesia that includes an estimated 230,000 deaths.

Several years later, a similar scene unfolds in Japan when ocean water flows onto land to submerge cars, homes and even a nuclear power plant that never again will return to functionality. That time, the flood waters claim approximately 16,000 lives.

The mind-boggling force of a tsunami is a horrifying spectacle, as the world witnessed in 2004 and 2011. Those disasters ingrained heart-wrenching images of water-borne tragedy into people’s minds around the world. For many Americans, though, such images depict a rare occurrence in far-off countries and not a phenomenon in the continental United States. But the reality is that a tsunami could happen here, and it would be equally devastating.



Wednesday, 12 July 2017 15:01

Surviving a Tsunami in the United States

With the increasing prevalence of IT hacks, intelligent business owners are becoming more aware of the importance of Business Continuity as a business skill. Staying resilient can determine the longevity of a business in today’s world. 

How many quotes about failing or falling have you heard?

“Every adversity, every failure, every heartache carries with it the seed of an equal or greater benefit.” – Napoleon Hill

“Our greatest glory is not in never falling, but in rising every time we fall.” – Confucius

“You drown not by falling into a river, but by staying submerged in it.” – Paulo Coelho



Today’s companies are often faced with the complex decision of whether to use public cloud resources or build and deploy their own IT infrastructures. This decision is especially difficult in an age of mounting data requirements when so many people expect limitless access and ultra-flexibility. For these reasons, cloud computing has become an increasingly popular choice for many organizations – though not always the right choice.

According to RightScale’s 2017 State of the Cloud Survey, 85 percent of enterprises have a multi-cloud strategy.

Common reasons for using public cloud resources include scalability, ease of introductory use and reduced upfront costs. In many ways public cloud usage is considered the “easy button.”



We all know the frustration of phoning a call centre, only to be put on hold for an interminable amount of time or taken through a long and complex series of options before arriving at a dead end. And when we finally get hold of someone, it is usually to battle with the language barrier or be told to call back later – all while paying an extortionate rate for the call itself.

A survey amongst ISO members suggests that the general public is, on average, only mildly satisfied with customer contact centres, indicating there is much room for improvement. It is for this reason that two new International Standards on the subject have just been published.

ISO 18295-1, Customer contact centres – Part 1: Requirements for customer contact centres, specifies best practice for all contact centres, whether in-house or outsourced, on a range of areas to ensure a high level of service; these include communication with customers, complaints handling and employee engagement.



The Business Continuity Institute

Internet speeds are getting faster all the time with Internet Service Providers competing with each other to offer the fastest connections that can enable users to download entire videos in just seconds. But could that be about to change? Could ISPs have more control over the download speeds they offer? Ultimately, does this mean that ISPs could have more control over what we are able to download?

On 12 July, tens of thousands of organizations will be joining a day of protest in support of net neutrality, the principle that ISPs treat everyone’s data equally: they don’t get to vary download speeds depending on the source of the data, or block sites altogether. The principle of net neutrality has often been described as the 'first amendment of the internet', as it is about ensuring equality of access to online information.

In February 2015, during the Obama administration, the Federal Communications Commission (FCC) in the United States voted to strictly regulate ISPs and enshrine in law the principles of net neutrality. The vote reclassified wireless and fixed-line broadband service providers as title II 'common carriers', which gave the FCC the ability to set rates, open up access to competitors and more closely regulate the industry. Two years on, however, Trump’s new FCC chairman, Ajit Pai, previously a lawyer at one of the major ISPs, is attempting to overturn that decision.

Removing net neutrality could allow ISPs to create special fast lanes for content providers they have arranged deals with, or perhaps more of a concern is that they could slow down traffic from content providers who are considered rivals.

Even AT&T, previously an opponent of net neutrality, is claiming to support the protest. Bob Quinn, Senior Executive Vice President of External and Legislative Affairs at the telecoms giant, commented: "We agree that no company should be allowed to block content or throttle the download speeds of content in a discriminatory manner. So, we are joining this effort because it’s consistent with AT&T’s proud history of championing our customers’ right to an open internet and access to the internet content, applications and devices of their choosing."

Wednesday, 12 July 2017 14:15

BCI: Day of protest over net neutrality

The Business Continuity Institute

When it comes to new spending, IT departments have two rather clear priorities - secure their data and continue the transition to the cloud, according to the Computer Economics annual IT spending and staffing benchmarks study 2017/2018.

Given the constant array of new threats facing IT departments every day, it is no surprise that security is a major priority. Malware, ransomware, phishing attacks, and security breaches are a near constant in the media, with the cost of repairing the damage and regaining customer trust also increasing. At the same time, cloud applications and infrastructure not only improve security but also improve budget flexibility, which allows IT departments to respond more effectively to the needs of the business.

A net 70% of IT organizations reported increased spending on security/privacy. Not a single company reported a decrease in such spending. A net 67% of respondents reported increased spending on cloud applications. A net 52% and 51% reported increases in spending on cloud infrastructure and business intelligence, big data, and data warehousing, respectively.

The lowest priority for new spending was disaster recovery/business continuity, with a net of 38% reporting increases. Despite being the lowest priority, this area still showed noticeable growth: last year only 33% of respondents reported increased spending in it, compared with 38% this year.

“We’re also seeing a modest increase in outsourcing spending,” said David Wagner, vice president, research, at Computer Economics. “A net of 27%, up from 20% last year, are increasing their spending on outsourcing. We’re also seeing outsourcing budgets as a total percentage of IT spending increasing.”
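A note on reading these figures: a 'net' percentage is conventionally the share of respondents reporting an increase minus the share reporting a decrease (this is the usual survey convention; the article does not spell out Computer Economics' exact formula). A minimal sketch:

```python
def net_percent(increasing: int, decreasing: int, total: int) -> int:
    """Share reporting increases minus share reporting decreases, as a whole percent."""
    return round(100 * (increasing - decreasing) / total)

# Security/privacy: 70 of 100 respondents increasing spending, none decreasing
print(net_percent(70, 0, 100))  # 70
```

On this reading, "a net 70% ... not a single company reported a decrease" means the gross and net figures happen to coincide for security spending.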

The Business Continuity Institute

A global survey of executives found that most view the world as increasingly risky, with many reporting a “significant operational surprise” over the past five years. However, the majority of executives also report that their organizations are not developing more robust risk management processes to help counter this increasing risk. This is according to a study published jointly by NC State’s Enterprise Risk Management (ERM) Initiative and the Association of International Certified Professional Accountants (AICPA).

The 2017 Global State of Enterprise Risk Oversight report revealed that approximately 60% of executives reported that the volume and complexity of their risks have increased over the past five years, though there was some variability across regions. 61% of executives in Europe and the UK reported an increase, 55% in Asia/Australasia, 76% in Africa/Middle East, and 59% in the US.

“These findings are particularly timely, given the political, economic and social uncertainties that businesses are facing in the United States and abroad,” says Mark Beasley, co-author of a report on the survey results and director of the Enterprise Risk Management (ERM) Initiative at North Carolina State University.

“The increase in risks, and the operational surprises, are tied to the dynamic global business environment,” Beasley says. “For example, Europe and the UK have seen issues ranging from the Brexit vote to immigration challenges, while Africa and the Middle East have dealt with a wide variety of challenges, such as disruptions caused by the ongoing war in Syria and conflicts with ISIS. The US has been comparatively stable, but we seem to have entered a period of domestic political uncertainty – which is not reflected in the survey – and of course issues abroad can have significant effects on US organizations.”

Given these widespread surprises and perceived increase in risks, one might think that executives are embracing ERM processes to better protect their organizations. But the survey found that the level of risk management oversight is relatively immature.

“All organizations engage in risk management, but conventional risk management is done in silos, whereas the ERM approach allows for a holistic overview of risks across silos,” Beasley explains. “In other words, it helps executives identify risks that span multiple silos, or that fall into blind spots that an organization might otherwise miss.”

However, few executives said that their organizations had put thorough ERM processes in place. For example, while 53% of executives in Europe reported increasing risks, only 21% reported having complete ERM processes in place. And only 24% of executives in the Africa region reported complete ERM processes, with the number rising to 26% in the US and 30% in the Asia region. In addition, 80% of executives surveyed reported that their organizations don’t conduct any formal risk management training for their executives.

“We’re seeing a major disconnect between how organizations perceive their challenges and how they are responding to them,” Beasley says. “However, we also found that boards of directors, especially outside the US, are calling for executives to be more proactive about addressing potential risks.”

Specifically, the survey asked executives whether their boards of directors were asking for “increased senior executive involvement in risk oversight.” 56% of executives in Europe said yes, with the number rising to 59% in the Africa region and 70% in the Asia region. But only 38% of survey respondents in the US reported the same pressure.

CEOs are becoming increasingly frustrated by organizations that over-emphasize the short term. And CECP — a coalition of CEOs that believes societal improvement is an essential measure of business performance — took notice. CECP is trying to redirect investor behavior to focus less on short-term events and more on corporate frameworks that are capable of generating long-term growth.

Daryl Brewster, CEO of CECP, talked with Deloitte Advisory’s Mike Kearney about the organization’s mission and how companies can create long-term value by being socially conscious. The win-win? Doing good can also be good for business. It can help build brand, engage employees and identify new markets.

“This isn’t just charity. This is about good investment. It’s not going to pay back in a month — but most good things don’t. But it can really have a positive and huge impact on the company.”



Acts of terrorism are on the rise globally. Over the past several weeks alone, the world has seen stabbings, shootings and bombings in Flint, Tehran, London, Kabul and Bogota.

We’ve spent the past several years researching how communities can prepare to provide urgent medical care to the large numbers of victims these events produce.

Given the persistent risk of terrorist attacks and large-scale accidents, it’s more critical than ever to learn from past incidents. That will ensure that first responders can work together effectively during the chaotic but critical minutes and hours after an incident.



Gary Wong is Director of Applications Engineering at Instor Solutions

Of all the natural disasters that can affect data centers, earthquakes are among the most damaging. Given the data center industry’s continued growth and expansion throughout California, these potentially catastrophic events are always top of mind for data center owners and operators.

With the passing of the 27th anniversary of the 6.9-magnitude Loma Prieta earthquake, centered within 10 miles of Santa Cruz, now is the time for data centers across California and other areas prone to seismic activity to reevaluate their earthquake disaster strategies and look at the availability of proactive protection plans.

Across the world, there are an estimated 500,000 detectable earthquakes each year - 10,000 in Southern California alone. These sobering facts lead to some important questions: If an earthquake like the Loma Prieta were to strike again, how are data centers better protected now than 27 years ago? What would the projected loss be to your company and customers if a major earthquake hit? What is your company doing to protect the valuable data and physical assets in your facility?



It is no secret that the most successful companies are the ones that constantly refresh and energize their growth strategies to capitalize on new market opportunities and remain competitive – both during challenging economic times and in periods of robust growth. In addition to organic growth, leading companies also employ inorganic approaches to build and refine their portfolios, including mergers and acquisitions (M&A), divestitures, and carve-outs.

IT is often mismanaged as an M&A value lever.  The importance of IT integration cannot be overemphasized because it has the highest potential for mistakes, due to complexities, time constraints, and the need for unified mobilization across the organization.  This is compounded by leadership, employee, supplier, and shareholder concerns.

Effective IT integration is key in achieving cost and revenue synergies, which in turn, drive merger success.  Typical challenges for realizing IT synergies include duplicated applications and infrastructures, divergence of IT and business objectives, and the seemingly uphill task of merging two distinct IT organizations – each with its own processes, policies, and practices – to maintain service quality and control costs.



Ever since I got my first job in IT in the mid-1990s, everyone has used a cloud in some form. Whether they referred to it as outsourcing, virtualization, central IT, or in some other way, the cloud existed and grew, but it did little to stem the adoption of distributed computing. Yet at some point over the past few years, the parallel growth of these two technologies stopped and the cloud forged ahead. This shift indicates that companies have now fully embraced the cloud but remain unclear about how best and how soon to transition their IT infrastructure to the cloud and then manage it once it is there.

One of my first jobs in IT was as a system administrator at a police department in Kansas. During my time there, I was intimately involved in a project to set up a cloud that enabled our department, along with other police departments throughout the state, to communicate with state agencies. Setting this cloud up would enable our department and others to run background checks as well as submit daily crime reports. While we did not at the time refer to this statewide network as a cloud, it did provide a means to send and receive data and to store it centrally.

However, the data that the police department sent, received, and stored with various state agencies represented only a fraction of the total data that the department generated and used daily. There were also photos, files, Excel spreadsheets, accident and incident reports, and many other types of data that officers and civilians in the police department needed and used to perform their daily duties. Since the state agencies did not need this data, it was up to the police department to manage and house it.



The social responsibility movement started with debates about corporations having a responsibility to society – it is now recognized that people, planet and profit are mutually inclusive. Since these early discussions, the concept has seen many transformative moments, including the launch of ISO 26000, a standard which has gained traction and credibility in less than a decade.

“I thought I was the only one struggling to reconcile my career with the demands of family, but after this session, hearing from managers and other colleagues, I can see how it is possible to enjoy both raising children and my job!” Fujii is just one of a number of Japanese women working at global electronics company NEC Corporation, who attended an event supporting female career opportunities in a country where women’s active involvement in the workplace is sorely lacking.

To achieve its goal, NEC Corporation turned to ISO 26000, the world’s first voluntary standard on social responsibility, which has helped thousands of organizations operate in an environmentally, socially and economically responsible way. Since its publication seven years ago, ISO 26000 has been adopted as a national standard in over 80 countries (and counting!) and its text is available in some 22 languages. It is also referenced in more than 3,000 academic papers, 50 books and numerous doctorates, and is used by organizations of all shapes and sizes including Petrobras, Air France, British Telecom, NEC, Novo Nordisk and Marks & Spencer, to name a few.



Monday, 10 July 2017 15:30

The rise of being “social”

While big data scientists are often perceived as the key to unlocking the potential value of big data, research conducted by the University of Kent indicates a different view.

Dr Maggie Zeng from Kent Business School, in collaboration with Professor Keith Glaister from the Warwick Business School, investigated the use of big data within five Chinese internet platform companies that have put big data at the heart of their operations.

They interviewed 42 individuals in senior management positions, including CEOs, at these firms, as well as conducting 34 interviews with partner firms and third-party developers, who work with these companies, to understand how they use big data internally and externally. They also analysed meeting minutes and business strategy documents to inform their research.

Their findings suggest that firms that hire many data scientists do not always generate better value creation opportunities. Rather, it was the process of data management where managers are able to ‘democratize, contextualize, experiment and execute’ around the use of big data that helped firms derive the most benefits.

This is based on four key areas that senior managers can facilitate:

Data democratisation: By allowing more employees to access and interpret data it gives firms a better chance of insights being derived and enables better cross-team collaboration to ensure the right questions are being asked and answered.

Data contextualisation: Ensuring other relevant business information is accessible to staff enables them to place the data they are working with in the wider context of the organisation and understand what the results they generate mean.

Data experimentation: Creating an environment where staff feel able to experiment with data on a ‘trial and error’ basis enables them to find new insights within the data that more rigid data analysis structures prevent.

Data insight execution: Managers must create a culture where insights derived from big data analysis can quickly be used to ensure the potential benefits the insights offer are realised.

The insights could help other businesses understand how to make better use of their ever-increasing data silos to enable strategic decision-making.

The research was published in a paper titled Value creation from big data: Looking inside the black box, in the journal Strategic Organisation.

The Business Continuity Institute

An ongoing internet outage in Somalia is costing the country $10m (£7.7m) each day, and sparking anger across the affected central and southern parts of the country, including the capital, Mogadishu. The outage is reported to have been caused by a commercial ship cutting an undersea fibre-optic cable more than two weeks ago, and is expected to go on for at least another week.

The post and telecommunications minister - Abdi Anshur Hassan - told a press conference that Somalia has lost more than $130 million so far.

Internet service providers have since resorted to using satellite communications to provide access to the internet; however, this remedy was described as weak and unable to cope with the huge demand.

Internet outages are a major concern for organizations across the world, with the Business Continuity Institute’s latest Horizon Scan Report featuring them in third place on its list of threats, and 80% of respondents to a global survey expressing concern about the prospect of an outage occurring. In Sub-Saharan Africa it was in second place on both the list of concerns and the list of actual disruptions.

After more than 20 years of conflict, internet usage is low in Somalia, with just 1.6% of the population online in 2014, according to estimates by the International Telecommunication Union.

The Business Continuity Institute

Plans to clamp down on bogus holiday sickness claims have been announced by the UK’s Ministry of Justice following concerns from the travel industry that more and more suspected false insurance claims for gastric illnesses like food poisoning are being brought by British holidaymakers.

Advice from the travel industry shows the upsurge of claims in this country – reported by the industry to be as high as 500% since 2013 – is not seen in other European countries, raising suspicions over the scale of bogus claims and damaging our reputation overseas.

Due to the reported increase in claims, and as many tour operators appear to settle them out of court, the costs to the industry are increasing. In addition to the high costs of settling these claims, the bogus complaints are also damaging to the reputations of those tour operators involved.

A major barrier to tackling the issue is that these spurious claims are arising abroad. Legal costs are not controlled, so costs for tour operators who fight claims can be out of all proportion to the damages claimed.

Ministers today said they want to reduce cash incentives to bring spurious claims against package holiday tour operators. Under these proposals tour operators would pay a prescribed sum depending on the value of the claim, making the cost of defending a claim predictable.

Justice Secretary David Lidington said: “Our message to those who make false holiday sickness claims is clear – your actions are damaging and will not be tolerated. We are addressing this issue, and will continue to explore further steps we can take. This government is absolutely determined to tackle the compensation culture which has penalised the honest majority for too long."

The Business Continuity Institute

Almost half a million people on the south western Japanese island of Kyushu have been advised to evacuate their homes after several days of torrential rain, brought on by a series of storms that followed Tropical Cyclone Nanmadol across the region. What was described as unprecedented levels of rain has resulted in mudslides, overflowing rivers and flooding.

The public broadcaster NHK reported that, since Wednesday, downpours of more than 550 millimeters were registered in Asakura City, in the Fukuoka Prefecture, which is about 50% more than usual for the month of July. The Meteorological Agency says some areas in the city of Iki, in the Nagasaki Prefecture, have had 'once-in-a-half century' downpours exceeding 300 millimeters over the previous 24 hours.

Poor road conditions prevented staff and deliveries from accessing the Daihatsu Motor plant in Oita, so all operations had to be stopped, and this is likely to be a scenario experienced by organizations across the region.

While ensuring that employee and stakeholder safety is paramount, organizations need to ensure that they are prepared for such events. Adverse weather came in at number five on the list of business continuity professionals' greatest concerns, according to the Business Continuity Institute's latest Horizon Scan Report, so it is something that needs to be prepared for.

Organizations must consider what would happen if they were affected by a flood, or any other type of disruption: what impact could that disruption have, could anything be done to prevent or reduce the risk, and how would they respond and recover? Furthermore, they need to consider how they would communicate with their employees and stakeholders to ensure they are kept informed.

Tougher to do, and with tougher consequences if you get it wrong: these are the two big trends in IT risk management today.

While CIOs remain the largest category of individuals responsible for IT risk management (ITRM), other executives - CEOs, CISOs, CFOs - now also stand at significant levels. Why?

Today’s business environment is also less forgiving than in the past. Operational glitches tend to be more severe, as do the business consequences. So, what could go wrong? And who in the organisation is responsible for mitigating the associated IT risk, other than the CIO?



China is a country of extremes, with well-developed industrialized cities flourishing while inhabited yet rugged and primitive regions struggle.

One of the remotest and historically poorest provinces in Southwest China—Guizhou—has come a particularly long way in a short time and is well on its way to becoming a hub for China’s push into big data. What resembled suburbia a decade ago has been converted into a new urban district complete with skyscrapers, a convention center, and data centers.

High-speed railways, bridges, tunnels, and added international flights linking it to domestic and foreign cities have lifted the province from isolation and connected it with the world.



The Business Continuity Institute

Photograph courtesy of Frank Schwichtenberg

There's a lot of prestige that comes with hosting a large international event like the G20 Summit - it puts the city firmly on the map and can position it as a major player on the international scene. That's not to mention the investment it brings in as leaders from the world's 20 most prosperous countries descend on it along with their various entourages, and the media circus that will inevitably follow.

Of course the positive side is not appreciated by all, and there will be people in Hamburg who are rueing the day it was picked to host one of the largest events on the international political stage.

The world leaders are still arriving, but already violence has broken out with a Porsche dealership burnt down. Windows are being boarded up and manhole covers sealed. The water cannons have been sent out to disperse demonstrators, 100,000 of whom are expected to turn up, and whose activities are only expected to intensify over the next few days.

It is always hoped that these events will have far reaching consequences in terms of the decisions made - migration, terrorism, climate change and trade will all be discussed at length, and it would be nice to think there will be some positive outcomes. Arguably resilience professionals should be keeping a close eye on these areas of discussion, as the outcomes could have implications for our organizations.

In the short-term however, there will also be far reaching consequences for organizations based in Hamburg, and the people who live there, who will experience severe disruption over the next few days as their city is put in lockdown.

Such is the disruption that these events bring, the German Foreign Minister - Sigmar Gabriel - has already suggested that, in future, they should be held at the United Nations Building in New York where security measures are already in place. At the moment the summit is hosted by the country that holds the rotating presidency, and security can cost in the region of €150 million.

Fortunately, with events like this, organizations have plenty of time to prepare as they know they're coming. And as much as the violence that breaks out can be shocking, given previous experience it shouldn't come as a surprise. Most of us know exactly what to expect. Of course that doesn't offer any reassurance to the Porsche dealer. But for many, with some forward planning and stakeholder engagement, it should be just an inconvenience, rather than anything more destructive, as the city is temporarily put on hold.

Friday, 07 July 2017 14:52

BCI: When the circus comes to town

While watching the sun disappear below the horizon or stargazing at night from the deck are the staples of a cruise experience, vacationers also want to watch movies on-demand or browse the internet while in their cabins.

Much like a big hotel, a cruise ship usually has a data center onboard to provide digital services. While a data center on a ship is similar to one in a hotel – both have servers, storage, and networking gear to run software – there are some differences.

Cruise ships are mobile, speeding toward their next port of call in the Baltic Sea, the Mediterranean coast or the Canary Islands, and ensuring service availability means both the primary and backup data centers are usually on the same vessel, not miles apart.



If you ask cloud business leaders the key to growing in this industry, you’ll get a lot of different responses. Any technology segment that’s growing so quickly is bound to shut out companies that don’t have the right strategy for getting their piece of the pie.

There are some clear keys to succeeding in the cloud, from being able to offer end users a wide range of products to having the financing in place to scale your business. Here are five ingredients for growing your cloud business:



A Digital Transformation

Increased sophistication in technology platforms, banking channels and digital initiatives has ushered in transformation in the banking industry. But these changes have also brought about increasingly sophisticated financial crimes. Bank fraud is now being committed by tech savvy criminals who find means to bypass the fraud detection rules bank platforms employ.


The last two decades have seen phenomenal transformation in the banking industry, through sophistication in technology platforms, banking channels and digital initiatives. Financial technology (FinTech) has brought about a complete revolution in the ease with which the common man does banking! From “brick” to “click,” banking today is not about visiting a bank’s physical branches as much as it is about conducting transactions online through the internet and mobile devices (mobile banking and digital wallets) at a click. Even ATMs are being reimagined to cater to a number of banking operations which could not be envisaged a decade ago!

This transformation has brought about enhanced agility, greater efficiency and flexibility in banking. But at the same time, there are widespread concerns about some complex problems banks are facing today, including sophisticated financial crimes, which are difficult to track using the regular rule-based financial crimes risk management systems. Bank fraud and money laundering are now being committed by tech savvy criminals who understand the systems and processes in place in banks to detect financial crimes and hence find means to bypass the detection rules to commit such crimes.

In this article, we try to explore the current fraud control frameworks in banks, the challenges faced by banks in fraud risk management and how emerging digital innovations can strengthen such frameworks, thereby reducing the risk of financial crime and ensuring improved regulatory compliance.
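To make the contrast concrete, here is a minimal sketch of the kind of rule-based screening described above; the `Transaction` fields, rule names, and thresholds are all hypothetical, invented for illustration rather than taken from any real banking platform:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    country: str   # country code of the counterparty
    hour: int      # local hour of day, 0-23

# Hypothetical rule thresholds
LARGE_AMOUNT = 10_000.0
HOME_COUNTRY = "IN"

def flag_transaction(tx: Transaction) -> list[str]:
    """Return the names of the static rules this transaction trips."""
    flags = []
    if tx.amount >= LARGE_AMOUNT:
        flags.append("large-amount")
    if tx.country != HOME_COUNTRY:
        flags.append("cross-border")
    if tx.hour < 6:
        flags.append("odd-hours")
    return flags

print(flag_transaction(Transaction("A-1", 12_500.0, "AE", 3)))
# ['large-amount', 'cross-border', 'odd-hours']
```

Because the rules are static, a criminal who learns the thresholds can simply keep each transaction just under them, which is exactly the weakness the article attributes to rule-based detection systems.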



One of the most frequent discussions we here at MSPmentor have with managed services providers (MSPs) and vendors revolves around challenges in the relationships between the parties.

Many times it’s the MSPs complaining about vendors’ slow responses to support requests or disagreements about roadmap priorities.

Other times, it’s executives from vendors who voice frustration about MSPs’ unrealistic expectations or unwillingness to more fully utilize profit-generating features of their software products.

In an effort to foster greater understanding about such an important dynamic in the IT services provider ecosystem, MSPmentor will be exploring this topic during the second half of 2017 – and we want your help.



In so many ways IT operations has developed a military-style culture. If IT ops teams are not fighting fires they’re triaging application casualties. Tech engineers are the troubleshooters and problem solvers who hunker down in command centers and war rooms.

For the battle weary on-call staff who are regularly dragged out of bed in the middle of the night, having to constantly deal with flaky infrastructure and poorly designed applications carries a heavy personal toll. So, what are the signs an IT organization is engaged in bad on-call practices? Three obvious ones to consider include:


(TNS) - With the Atlantic hurricane season well underway, Lowndes County, Ga., officials don’t want folks to think “It can’t happen here.”

The county held its first public hurricane preparedness meeting last week at the James H. Rainwater Conference Center. On hand were county officials, representatives of volunteer organizations and experts on pet safety during evacuations, speaking in front of dozens of interested onlookers.

Home Depot assisted the county with the meeting by providing free five-gallon buckets to all who showed up. The buckets can be used for fast and easy emergency evacuation kits, said Ashley Tye, Lowndes County’s Emergency Management Agency director.



Thursday, 06 July 2017 15:25

Officials Urge Hurricane Readiness

Back at the dawn of the internet, data centers could be small and simple. A large ecommerce service could make do with a couple of 19-inch racks holding all the necessary servers, storage, and networking. Today’s hyper-scale data centers cover acres, with tens of thousands of hardware boxes sitting in thousands of racks. Along with the design changes, these mega-server farms have been built in new, remote locations, trading proximity to large population centers for cheap power.

As they automate data center operations, public clouds like Amazon Web Services or Microsoft Azure hire fewer and fewer highly skilled data center engineers, who are usually outnumbered by security staff and relatively low-skilled workers who do manual labor, such as handling hardware deliveries. Fewer staff managing more servers means monitoring the power and cooling infrastructure requires greater reliance on sensors, which we might now call Internet of Things hardware. They help identify issues to an extent, but in many cases the experience of a seasoned facilities engineer is hard to replace with sensors: recognizing the sound of a fan that is about to fail, for example, or locating a leak by the sound of dripping water.

You need more than sensors to monitor modern data center infrastructure, and a new generation of applications aims to fill the gap by applying machine learning to IoT sensor networks. The idea is to capture operator knowledge and turn it into rules to help interpret sounds and video, for example, adding a new layer of automated management for increasingly empty data centers. The services promise “to predict and prevent data center infrastructure incidents and failures,” Rhonda Ascierto of 451 Research told Data Center Knowledge. “Faster mean time to recovery and more effective capacity provisioning could also reduce risk.”
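The "capture operator knowledge and turn it into rules" idea described above can be sketched with a toy example: a rolling z-score over simulated fan-vibration readings, standing in for a learned model that flags when a sensor stream departs from its normal signature. Everything here is illustrative; the function name, window size, and threshold are assumptions, not from any vendor's product.

```python
# Hypothetical sketch: flag anomalous fan-vibration readings with a
# simple rolling z-score, a crude stand-in for ML-based sensor analytics.
import statistics

def detect_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings that deviate strongly from the
    trailing window's mean (a toy substitute for learned rules)."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid div-by-zero
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A stable fan signature with one sudden spike at index 15.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
             1.02, 0.98, 1.0, 1.05, 0.95, 9.0, 1.0, 1.1, 0.9, 1.0]
print(detect_anomalies(vibration))  # → [15]
```

A production system would replace the z-score with models trained on labeled incident data (and on audio or video features, per the article), but the shape of the pipeline is the same: continuous sensor streams in, incident flags out.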



More than $14 billion. That’s the expected insured loss from severe convective storms, thunderstorms, tornadoes, large hail and associated damaging winds in the United States in the first six months of this year.

From the Artemis blog, via Impact Forecasting, the catastrophe risk modeling center at Aon Benfield:

“The insurance and reinsurance industry faces more than $14 billion of losses after the first-half severe storm activity in the U.S., while the economic loss is set for $22 billion or higher, putting 2017 as the fourth most costly year for both economic and insured losses due to convective weather activity.”



Ah yes, agile, that buzzword being borrowed by so many parts of the business world! The word itself is full of promise, suggesting all kinds of good things, like flexibility, nimbleness, and adaptability.

Conversely, if you’re not agile, you’re clumsy, inflexible, and probably slated for disappearance in the near future. Some agile business continuity proponents borrow from the original agile manifesto drawn up by software developers to make a nifty, concise manifesto of their own.

Yet, while fossilised BC plans and attitudes have no place in successful BC management, we need to be careful not to slide the agile cursor too far over to chaos.



Last week we discussed combating insider threats, beginning with identifying them. This is such an important subject that we want to help you identify some of the most common insider threats. As a reminder, insider threats are threats to a network, computer system or data that originate from a person with authorized system access. You should include mitigation practices for each of these in your Employee Security Policy as soon as possible. Why are we stressing this so much? Because cybercrime is too costly and prevalent to be ignored.

Intentional vs. Unintentional

Before we go into specific examples of insider threats, it’s important to make the distinction between intentional and unintentional threats.

Intentional threats or actions are conscious failures to follow policy and procedures, no matter the reason. People can act out of desire for revenge, theft, perceived justice, or even a well-intentioned need to work from home to complete a task. Unintentional threats or actions, such as misuse of access, neglect, or lack of diligence, can occur without forethought. Though we often think of a threat as something intentional and malicious, the most common events are those with unintentional results. That being said, a deliberate event can be the most devastating and long-lasting, especially when done with the intention of causing harm to the organization. As such, an Employee Security policy should be designed to protect your organization from both threat classifications.



Thursday, 06 July 2017 15:20

Common Insider Threats

It could be argued that digital technologies present more profound and disruptive opportunities and threats to established business models than anything that’s come before. In Digital Disruption of Business Models: The Mass Mitec Story, David Wortley charts the digital transformation of Mass Mitec, a UK-based small-to-medium enterprise, via a disruptive digital technology in the 1990s and uses the story to illustrate the potential and dangers of digital disruption.

Even though Mass Mitec had a very good understanding of the evolution of the technologies its business models were based on, and had built a business development plan reflecting that evolution, it failed to properly secure or exploit the business and contractual arrangements with its key partner.

The lessons David shares from this experience are relevant to today’s innovators and digital disrupters. These include:



The global cyberattack that has been wending its way across continents since Tuesday started creating real consequences at some businesses even as the virus’s spread seemed to be abating.

FedEx Corp. said it could suffer a “material” financial impact after the bug affected the worldwide operations of its TNT Express delivery unit. Danish shipping giant A.P. Moller-Maersk A/S shut down systems across its operations to contain the cyberattack and said the impact on its business is “being assessed.” The company’s APM Terminals unit closed its Port Elizabeth facility in New Jersey Wednesday and suspended gate operations Thursday.

Other companies were forced to resort to old-school business practices after taking corporate email offline to contain further contamination. Employees at global snack giant Mondelez International Inc. were working via cellphones, text messages and personal email, while law firm DLA Piper closed its systems as a “precautionary measure,” meaning clients couldn’t contact its team by email or land-line.



Across the globe, severe weather is a fact of life. The type of weather condition and its severity may vary from location to location, but it’s unavoidable, and your organization will likely be impacted by it to some degree at least once. Hurricanes, tornadoes, wildfires, flooding, earthquakes, tropical storms, and more all have the potential to severely impact your business’s operations if you aren’t properly prepared.


Resiliency isn’t just about preparing your business continuity plan and checklist and putting it in the hands of the managers and key stakeholders who need it; it’s about actively planning for the risks that may impact you and considering all possible outcomes. Once you’ve done all of that, a checklist can be developed to cover what needs to be done regularly prior to a severe weather event, what needs to happen if a severe weather event occurs, and what needs to be done immediately following the natural disaster.

Without a team actively invested in the planning process, your business could face tremendous loss, not only in revenue but in work time, employees, etc. Severe weather events can, and often do, exact serious damage to assets, personnel, and day-to-day operations. More than that, your brand and the public perception of your organization are at risk, too. How you handle a natural disaster or crisis can make or break your place and/or reputation in the community.



Employees are a company's greatest asset, but also its greatest security risk.

"If we look at security breaches over the last five to seven years, it's pretty clear that people, whether it's through accidental or intentional introduction of malware, represent the single most important point of failure in terms of security vulnerabilities," said Eddie Schwartz, chair of ISACA's Cyber Security Advisory Council.

In the past, companies could train employees once a year on best practices for security, said Wesley Simpson, COO of (ISC)2. "Most organizations roll out an annual training and think it's one and done," Simpson said. "That's not enough."

Instead, Simpson said, organizations must do "people patching": similar to updating hardware or operating systems, you need to consistently update employees on the latest security vulnerabilities and train them to recognize and avoid them.



The Business Continuity Institute

If you've ever been to the west coast of Scotland, you'll be well aware that rain is an inevitability, even during the supposed summer months. It therefore came as a surprise to read about an outdoor Green Day concert, due to be held last night in Glasgow, that had to be cancelled due to "adverse weather".

It does make you wonder about the lack of forethought that some people have. Clearly safety has to be paramount, and if it's not safe for the concert to go ahead then it has to be cancelled. But should this not be considered in advance? Should the concert organizers not have thought that it might rain on the west coast of Scotland, so put plans in place to remedy any impact of this?

As a result, several thousand music fans were sent home disappointed with only a few hours to go before the concert was due to begin. They may get their tickets refunded, but will they get their travel and accommodation refunded? Unlikely. Several hundred workers on zero-hour contracts were sent home unpaid. Can they afford to give up their time and not get compensated for it? Unlikely. And, of course, the organizer will lose out on the revenue they would have received from the event, not to mention the reputational loss.

At the Business Continuity Institute we publish our Horizon Scan Report each year which outlines the main threats that organizations face. This report sets the baseline for what those threats are, but it's essential that organizations conduct their own horizon scan in order to assess the threats relevant to them - their sector, their location, their size or their specific circumstances. If you're hosting an outdoor concert on the west coast of Scotland, then weather should have been picked up as a potential issue.

The organizer should have considered that rain was a strong likelihood and then thought through the potential implications of this. The organizer should have looked at what mechanisms could be put in place to prevent rain from becoming a health and safety issue.

Our organizations face disruptions all the time, but with some basic preparation in advance, we can make them ready to face those disruptions so they don't become damaging.

But, if we are to help make our organizations more resilient then we need to plan ahead. We need to think through our activities and what the potential risks are. Finally we need to take action to ensure that, should those risks materialise, we can still function normally, or as close to it as possible.

David Thorp
Executive Director of the Business Continuity Institute

Thursday, 06 July 2017 14:42

BCI: It always pays to plan ahead

The Business Continuity Institute

The UK remains an attractive place to live and work, but could face challenges in retaining large numbers of non-British workers, according to research by Deloitte, which also indicates significant changes in the UK labour market. Deloitte argues these changes will require a measured immigration approach, upskilling UK workers and making better use of automation for the UK to adapt successfully.

89% of non-British workers say they find the UK either quite attractive or highly attractive as a work destination and of those currently based outside the UK, 87% would consider moving to the UK if the right opportunity presented itself.

Highly-skilled non-EU citizens are the most likely to consider moving to the UK: 94% say they would move to the UK if they could, with 83% of highly-skilled EU citizens saying the same. Among less-skilled workers, 79% of EU nationals and 93% of non-EU nationals would consider moving to the UK.

For respondents based outside the UK, the UK ranked as the most desirable place to work with 57% of respondents placing it in their top three destinations, ahead of the US (30%), Australia (21%) and Canada (19%).

Respondents already in the UK were asked what attracted them to the UK. 51% put job opportunities in their top three choices, followed by cultural diversity (34%), better lifestyle (30%) and work-life balance (27%). For those outside the UK, 54% said job opportunities was a strength for the UK, followed by cultural diversity (43%) and work-life balance (40%). London was also cited by 37% of respondents as a strength, as was the UK’s global connections (30%).

Attitudes among non-UK citizens have shifted since the referendum on EU membership. 48% of migrant workers already in the UK see the country as being a little or significantly less attractive as a result of Brexit, compared to only 21% of workers outside the UK. Highly-skilled workers report the largest drop in the attractiveness of the UK. Of those currently living in the UK, 65% of highly-skilled EU workers and 49% of highly-skilled non-EU workers say the country is now less attractive. Among less-skilled workers, 42% of EU citizens and 25% of non-EU citizens say the country is now less attractive.

Overall, 36% of non-British workers in the UK say they are considering leaving the UK in the next five years, representing 1.2 million jobs out of 3.4 million migrant workers in the UK. 26% say they are considering leaving within three years.

Highly-skilled workers from EU countries are the most likely to consider leaving, with 47% considering leaving the UK in the next five years, versus 38% of highly-skilled non-EU workers. Among less-skilled workers, 27% of EU citizens and non-EU citizens say they are likely to leave in the next five years.

Overall, 58% of non-British workers say it will be difficult or very difficult to find a UK worker to replace them. This rises to 70% of highly-skilled EU workers and 56% of highly-skilled non-EU workers. Among less-skilled workers, 61% of EU workers, but only 33% of non-EU workers, say it will be difficult to replace them.

David Sproul, senior partner and chief executive of Deloitte North West Europe, said: “The UK remains a highly attractive place to work for people from around the world. Despite political and economic uncertainties, more people are attracted to live and work in the UK than anywhere else in the world. Nine out of ten overseas workers would consider moving to the UK if the right opportunity presents itself. The UK’s cultural diversity, employment opportunities and quality of life are assets that continue to attract the world’s best and brightest people.

“But overseas workers, especially those from the EU, tell us they are more likely to leave the UK than before. That points to a short to medium term skills deficit that can be met in part by upskilling our domestic workforce but which would also benefit from an immigration system that is attuned to the needs of the economy.”

Angus Knowles-Cutler, vice chairman and London senior partner, said: “The UK economy depends on migrant workers to plug gaps in both highly skilled and lower skilled jobs. If immigration and upskilling can help fill higher skill roles, automation can help to reduce reliance in lower skill positions. This will require careful consideration region by region and sector by sector, but there is a golden opportunity for UK workers and UK productivity if we get it right.”

The Business Continuity Institute

Staff at the Bank of England have voted overwhelmingly in favour of strike action in a ballot calling on their employer to give them a better pay deal. In the ballot, 95% voted for strike action which will be for the first time at the bank in over 50 years.

Unite has informed the Bank of England that its members working in the maintenance, parlours and security departments will be taking four days of strike action on 31st July, 1st, 2nd and 3rd August 2017. If both sides fail to resolve the pay dispute, the union will be consulting its members across other departments of the bank as part of the escalation plan.

"It is repeatedly said that staff are an organization's greatest asset, so if that is the case then we need to have plans in place to deal with their loss," said David Thorp, Executive Director at the Business Continuity Institute. "With the UK Government insistent that all public sector pay rises are to remain capped at 1%, it is likely that this will be the first of many strikes to be called across the country over the foreseeable future."

Unite regional officer Mercedes Sanchez said: “Staff at the Bank of England have made their anger clear by voting for strike action in July. The result will be that the bank’s sites, including the iconic Threadneedle Street in the City of London, will effectively be inoperable without the maintenance, parlours and security staff."

However, a spokesperson for the Bank of England responded that: "Should the strike go ahead, the Bank has plans in place so that all sites can continue to operate effectively.”

The Business Continuity Institute

As businesses increasingly become the target of sophisticated hacking attacks, there is a greater need for them to properly prepare themselves or face a hefty bill, including ‘slow burn’ costs such as reputational damage, litigation and loss of competitive edge. This is highlighted in a study by Lloyd's, produced in association with KPMG and DAC Beachcroft, which looks at the nature of the current cyber risk landscape as well as the top threats by industry sector.

Closing the gap – insuring your business against evolving cyber threats identifies ransomware – such as the WannaCry worldwide ransomware attack – as a rapidly increasing threat, together with distributed denial-of-service (DDoS) attacks and CEO fraud. The analysis also highlighted that financial services firms are the most targeted by organized cyber crime, but that retail is also increasingly being targeted.

Inga Beale, CEO of Lloyd’s, said: “The reputational fallout from a cyber breach is what kills modern businesses. And in a world where the threat from cyber crime is when, not if, the idea of simply hoping it won’t happen to you, isn’t tenable.

“To protect themselves businesses should spend time understanding what specific threats they may be exposed to and speak to experts who can help handle a breach, minimise reputational harm and arrange cyber insurance to ensure that the risks are adequately covered. By reacting swiftly to mitigate the impact of a cyber breach once it has occurred, companies will be able to minimise the immediate costs and their exposure to subsequent slow burn costs.”

Matthew Martindale, Director in KPMG’s cyber security practice, said: “Cyber risk has moved up in the business agenda and businesses are taking measures to prepare themselves. However, they are failing to factor in the long-term damage that a breach can cause and the cost implications of it. Dealing with things like reputational issues and litigation in the aftermath of a breach, can add substantial costs to the overall loss. Businesses really need to start thinking about the cyber risk holistically rather than one that is currently very short sighted.”

Hans Allnutt, Partner, Head of Cyber and Data Risk at DAC Beachcroft, said: “Whilst the immediate business impact of a breach could be significant for any organization, it may only be the tip of the iceberg when it comes to dealing with the legal consequences which may last months or even years. Once notified, it is not uncommon for regulatory investigations to take more than a year before they reach a conclusion. Subsequent litigation can take even longer, particularly because the law surrounding data security and privacy is a relatively evolving area. In one UK data protection case, it took three years and a failed appeal before the litigation was finally settled.”

The Business Continuity Institute

The nature and effects of the recent terrorist attacks in London and Manchester are broadening the industry's understanding of terrorism insurance, and could result in a permanent shift away from policies based on damage to property.

Traditionally, terrorism policies have tended to kick in when there is damage to the property of the insured. But the real damage caused by the 'lone wolf'-style tactics adopted by the attackers at Westminster, Manchester Arena and London Bridge was loss of life, injuries and significant disruption to local businesses. So-called 'denial of access' cover, for example, tends still to be linked to property damage.

Insurers must therefore focus on how business interruption cover is being extended beyond the realm of property damage. The development of contingent business interruption cover in response to recent earthquakes and floods that have affected global supply chains is a good example of an alternative approach, although even here there has to be an element of damage to the supplier of a business, if not to the business itself.

We are seeing the growth of business interruption products such as those available in the cyber market in relation to data breaches that lead to loss of profits and other intangibles. However, these products are still in the relatively early stages and need further development.

A recent report by Pool Re, the UK's government-supported terrorism risk reinsurer, described as "unprecedented" the three recent attacks in the UK.

Pool Re's analysis found that the attacks had many common features. All of them were undertaken by Islamist extremists and have been claimed by Daesh, although the claims have not yet been corroborated. All three attacks took place in crowded places, including tourist locations and social venues, where civilians were going about their day to day lives. The attacks seemed to be timed to maximise casualties, and civilians were indiscriminately targeted regardless of age, gender or nationality.

Attacks of this nature would have been completely unforeseeable when Pool Re was established in 1993, in response in part to the IRA bombing of the Baltic Exchange in London in April 1992. That attack, which killed three people, destroyed the Exchange building and caused huge property damage in the centre of the City of London.

In those days, terrorists used bombs and sophisticated weapons and acted together. As a result, insurers continue to view terrorism risk as the risk of an organised plot or threat to damage property. The result is a recognised 'insurance gap' for business interruption arising from non-property damage.

The recent examples show how substantial that gap could be. The Insurance Insider (registration required) has estimated the value of Ariana Grande's claim for cancelled tour dates in London and mainland Europe following the Manchester Arena bombing at £300,000. Take That, who had to cancel three shows due to take place at the Manchester Arena that same week, could receive between £500,000 and £1 million to cover the cost of rescheduling the shows, according to the same report. Although property damage to the arena itself is likely, the cost of business interruption - particularly due to the closure of Manchester Victoria train station for a week - will ultimately be far more significant.

The question now is how quickly insurers might be able to adapt to these new realities. However, the global insurance market is not renowned for its speed of movement. Theresa May's government has tried to be quick to shape its regulatory approach to the needs of the insurance market - see, for example, its move to make it easier to underwrite insurance linked securities (ILS) in London - but political uncertainty following the recent election result, and the pressures on the government to negotiate the terms of Brexit, are likely to impact future initiatives.

Nick Bradley is an insurance law expert at Pinsent Masons, the law firm behind Out-Law.com.

The Business Continuity Institute

Local authorities in the UK perceive themselves to be vulnerable in the face of cyber attacks, particularly in the wake of the recent ransomware attack on the NHS, with just over half (53%) of local authorities claiming they are prepared to deal with a cyber attack, according to a new study carried out by PwC.

While the latest PwC Global CEO survey found that 76% of UK CEOs are concerned about cyber threats, The Local State We’re In revealed that only 35% of local authority leaders are confident that their staff are well equipped to deal with cyber threats. Demonstrating how real those threats are, almost all (97%) of UK CEOs surveyed say they are currently addressing cyber breaches affecting business information or critical systems.

A parallel study of consumers, which asked about the performance of their local authority, found that only a third (34%) of respondents trusted their council to manage and share their data and information appropriately while there was a growing appetite for council services to be available online.

The research also surveyed councils’ confidence in their ability to maintain existing levels of local service delivery. While the majority of councils (68%) were confident about maintaining service delivery over the next 12 months, a mere 1 in 6 (16%) believed they could make necessary cost savings while maintaining existing levels of services over the next five years.

Commenting on the findings overall, Jonathan House, PwC partner said: “As councils look ahead to the future there will be new risks to manage, from the shift away from the uncertainties of grant funding, to an ever more demanding public. The recent ransomware attacks, and other high-profile incidents impacting them show some of these challenges.

“However councils have proved before their resilience and ability to deal with any challenge they are faced with. The survey data suggest that Councils have taken cost out of their operations - now the challenge is to manage and grow their capabilities - to utilise technology as a force for growth and to deliver citizens’ expectations of a digital organisation.”

The Business Continuity Institute

A consequence of Brexit is that two European Union agencies currently hosted by the United Kingdom will need to be relocated elsewhere in the EU once the UK is no longer a member. In the next few years, both the European Medicines Agency (EMA) and the European Banking Authority (EBA) will need to find a new home, with 27 countries all vying for the privilege.

The European Council has drawn up a list of six essential criteria that any country considering hosting these agencies must meet, and, in recognition of the role that business continuity plays in enabling stability and helping organizations to remain operational despite disruptive circumstances, this has been chosen as one of the criteria.

According to the procedure document published by the European Council, "This criterion is relevant given the critical nature of the services provided by the agencies and the need therefore to ensure continued functionality at the existing high level."

"It concerns amongst other things the ability to allow the agencies to maintain and attract highly qualified staff from the relevant sectors, notably in case not all current staff should choose to relocate. Furthermore, it concerns the capacity to ensure a smooth transition to the new locations and hence to guarantee the business continuity of the agencies which should remain operational during the transition."

All member states now have until the end of July to submit their bids and prove their business continuity capability, with a final decision to be taken in November.

When a crisis hits or your business is disrupted due to any unexpected event, the media will come a-knockin’. That’s why it’s so important to have a detailed, quality business continuity plan in place and to understand the role that the media play in the public’s perception of not only the crisis itself but how your organization handles it.


Making the media your ally is important in the immediate aftermath of a crisis or business disruption. The sooner you can respond with an official statement, the better off you’ll be, but the key with media is transparency. Your organization’s reputation is fragile in these moments and the public is quick to demand an honest, transparent response.

Remember, it is the media’s job to find the truth, so make their life and your recovery easier by being honest from the very beginning.



What Business Owners Need to Know as Governments Outsource Code Enforcement

Companies across virtually every industry are experiencing a rapid increase in regulation. Naturally, regulatory agencies are having a hard time keeping up with enforcement. That being the case, some state and local governments are turning to private companies and outsourcing enforcement. Compliance is always in a company’s best interest, but when regulators are able to spread the work around, any violations may be unearthed sooner.

Truck drivers delivering in Alabama a few years ago reported an uptick in code enforcement. Not only were they getting stopped along their routes across the state, they also were getting fined for not having the proper licenses to operate there.

But the inspectors knocking on their cabs weren’t employees of the state. They were hired guns, working for a third-party administrator brought on to enforce licensing laws within Alabama, where many jurisdictions require licenses for delivery companies, trucking services and other businesses that simply drive through the jurisdiction.

Like regulators in Alabama, state and local governments across the country are outsourcing their code enforcement operations, turning to private companies to boost efficiencies and improve compliance.



The recent IT outage at British Airways has been blamed on a power supply failure at the company’s data center, causing hundreds of flights to be delayed or canceled and affecting as many as 75,000 customers.

The outage should have been mitigated by backup generators and fail-safe mechanisms, but these appear to have been interconnected with the failed power supply, causing a system-wide shutdown which could end up costing the company up to $100 million.

This incident highlights the need for businesses to maintain effective backup and disaster recovery (BDR) technologies and processes, as IT systems and data have become mission-critical assets in virtually every industry today.



Thursday, 29 June 2017 15:23

The Seven Deadly Sins of BDR

Let’s face it, cyber-crime is a very real threat in today’s working world. From small businesses to large corporations, the risk is real and the impact can be great. Look no further than the latest WannaCry attack, which has impacted more than 230,000 victims in over 150 countries since it began. The malware locked up files in organizations as sensitive as hospitals and has shone a blindingly bright spotlight on the vulnerabilities in our digital security systems.

So the question shifts from “what if?” to “what do we do when?” As the probability of a cyber-attack increases, how do you keep your business safe? Here are a few key things to implement.



The Business Continuity Institute

We have just published the latest version of our Cyber Resilience Report and one of the conclusions of the report was that business continuity professionals need to collaborate more with their cyber/information security colleagues. The report noted that if expertise and resources are pooled then resilience can be built in a much more coordinated way. That seems eminently sensible.

Going beyond just IT, in my own foreword within the report I mentioned that cooperation is key to building cyber and organizational resilience, and that different disciplines must come together, share intelligence and start speaking the same language if they want to build a safer future for their organizations and communities.

Is that stating the obvious? Is that something that is already happening? The BCM Futures Report we published last year along with PwC showed that 90% of business leaders believe that resilience is greater when functions such as risk management, business continuity, ITDR and security are joined up, but only 37% believe that these areas are appropriately joined up at the moment. That’s a significant gap between the two, a gap that we all need to put more effort into reducing.

When devising your business continuity programme, do you engage with the IT department on issues relating to cyber security? Do you work with facilities management on the response to your building being out of action? Do you engage with the security department on your response to a terrorist incident? Do you talk to your communications department on reputational issues? There is so much crossover in the work of a business continuity professional that we need to make sure that crossover is being addressed. Otherwise it could lead to duplication of effort, or incomplete response plans.

Our current research project on megatrends looks at this issue in further detail, asking those working in the industry whether the different departments collaborate on both preparing for potential threats and responding to those threats materialising. From experience, and from listening to people within the industry, I very much get the impression that silos still exist, management disciplines still work in isolation, and lots more needs to be done. The initial responses to the megatrends survey seem to be quite mixed so far, and perhaps this is a fair reflection of the profession.

My challenge to those people working in the industry is to make sure you are engaging with the other management disciplines on a regular basis to ensure you are all coordinated, and are working together to improve the overall resilience of the organization. The BCM Futures Report I mentioned earlier showed that about half of business continuity professionals already see this as becoming more important in the future, but I think we need to start increasing that percentage.

As an Institute, we need to do our bit too, so my challenge to us is to engage more with other professional associations working in the resilience space, and build relationships with these organizations from across the world. Working in partnership with others will enable us to provide those in the resilience community with access to the right training, education and thought leadership.

As always, I would welcome your feedback. Are we already doing enough? Can we, or should we, be doing more? Please do share your thoughts.

David Thorp
Executive Director of the Business Continuity Institute

Another global ransomware attack, dubbed Petya, has disrupted operations at major firms across Europe and the United States.

More than 100 companies and organizations across various industries were affected, including shipping and transport firm AP Moller-Maersk, advertising firm WPP, law firm DLA Piper, Russian steel and oil firms Evraz and Rosneft, French construction materials company Saint-Gobain, food company Mondelez, drug giant Merck & Co, and Pennsylvania healthcare systems provider Heritage Valley Health System.

Today’s Insurance Information Institute Daily, via The Wall Street Journal, reports that the attack has exposed previously unknown weaknesses in computer systems widely used in the West.

The U.S. cyber insurance market grew by 35 percent from 2015 to 2016, based on recent reports.



If you want to find major emitters of global carbon dioxide, look no further than your city’s skyline. Buildings account for more than one-third of all final energy consumption and half of global electricity use. And they’re responsible for approximately one-third of global carbon emissions.

According to the International Energy Agency, energy consumption in buildings needs to be reduced by 80% by 2050 if we want to limit the world’s temperature rise to under 2 °C. But now there’s a solution for making our building stock more energy-efficient: enter the new ISO 52000 series of standards!

With ISO 52000-1, Energy performance of buildings – Overarching EPB assessment – Part 1: General framework and procedures, as its leading document, the ISO 52000 family will accelerate energy efficiency in the world’s building market. From heating, cooling, ventilation and smart controls, to energy-using or -producing appliances, the series will help architects, engineers and regulators assess the energy performance of new and existing buildings in a holistic way – without overheating budgets – as the temperature rises.



An email provider being used by the perpetrators of a global ransomware attack today shut off the hackers’ access to the account, blocking the main avenue by which victims could regain access to their files.

Today’s attack marked the second time in as many months that hackers have launched sophisticated, international ransomware campaigns based on EternalBlue, an exploit purportedly stolen last year from the National Security Agency and leaked to the public.

The German firm Posteo published a blog entry this afternoon announcing that its security specialists had identified one of its accounts being used by the hackers to collect the $300 (USD) ransom demanded from each victim.



The security industry has an accountability crisis. It's time to talk about it, then fix it. Whenever a massive cyber attack occurs, a chorus of voices inevitably rises to blame the victims. WannaCry on 5/12 and Petya on 6/27 yet again kicked off the familiar refrains of:

“If users didn’t click on stuff they shouldn’t….”

“If they patched they wouldn’t be down….”

“This is what happens when security isn’t a priority….”

“Now maybe someone will care about security…”

I have yet to meet a single user who clicked a malicious link intentionally – beyond security researchers and malware analysts, that is. I have yet to meet anyone who delights in not patching as a badge of honor. There are great reasons not to patch, and terrible reasons not to patch. As always, context and situation matter.



More than ever, your users are the weak link in your network security. Mitigating insider threats isn’t just about thwarting the malicious action of a disgruntled employee; a careless insider can also cause catastrophic damage. If you are not already doing so, you need to train employees in your policies and best practices. Employees who have been conditioned to remain vigilant – keeping security in mind during all activities – are far less likely to pose an insider threat. This method of mitigating insider threats is just one of the ways to protect your business.

First, let’s establish a simple definition of an insider threat as we discuss it in this article: an insider threat is a threat to a network or computer system that originates from a person with authorized system access. Insider threats are sometimes called insider risks or insider attacks.



The Business Continuity Institute

Despite ransomware being around for many years, with several high profile organizations suffering the consequences of such an attack, 57% of respondents to a survey carried out by Carbon Black said that WannaCry was their first exposure to how ransomware works.

Ransomware attacks have thrust cyber security onto the global stage in unprecedented fashion, with two recent attacks - WannaCry and NotPetya - rapidly spreading across the world and locking down thousands of networks. Organizations and individuals are now beginning to give greater consideration to how they would react if they were exposed to an attack, or if an organization they dealt with was exposed.

The Ransom-Aware Report noted that, while it’s never a good thing when 150 countries are simultaneously affected by a cyber attack, the increased awareness will only serve to incite positive action. Ransomware is certainly nothing new, but consumers are increasingly turning to organizations with questions about how they are protecting sensitive data. Organizations, in turn, are putting more effort into improving cyber security in order to protect their data and remain operational in the event of an attack.

For many consumers, losing trust in an organization could result in them taking their custom elsewhere. When presented with the statement: 'I would consider leaving my current financial institution / healthcare provider / retailer if my sensitive information was taken hostage by ransomware,' the study found that 72% of consumers said they would consider leaving their financial institution; 68% of consumers said they would consider leaving their healthcare provider; and 70% of consumers said they would consider leaving their retailer.

When respondents were asked if they would personally be willing to pay ransom money if their own computer and files were encrypted by ransomware, it was close to a dead heat with 52% of respondents saying they would pay and 48% saying they would not. Of the 52% who said they would pay: 12% said they would pay $500 or more, 29% said they would pay between $100 and $500, while 59% said they would pay less than $100 to get their data back.

The Business Continuity Institute's latest Cyber Resilience Report showed that two-thirds of organizations had experienced a cyber security incident during the previous year. With consumers giving a lot more attention to how organizations are responding to those incidents, it is essential that organizations have plans in place to respond effectively and prevent data being lost.

The Business Continuity Institute

On the day that the Business Continuity Institute launched its latest Cyber Resilience Report, the importance of ensuring our organizations are prepared for a cyber security incident has once again been demonstrated as a new ransomware attack is causing turmoil across the world.

The attack, dubbed NotPetya due to its similarities to a previous virus called Petya, has resulted in organizations worldwide having their data encrypted, with a demand made for the equivalent of about $300 to be paid in Bitcoin.

NotPetya uses the same exploit that allowed WannaCry to spread so rapidly, but is thought to have found additional ways to infect new systems. It is not yet known how computers originally became infected, but it does not appear to be via email.

This particular attack was first reported in Ukraine where the state power company and Kiev's main airport were both affected, but it has now spread to many other countries including the US, UK, France, Russia and India.

Business continuity can be key to minimising the impact of such an attack and can make a real difference during any kind of emergency, crisis or disruption. It is what makes an organization resilient, ready to respond and carry on, even amid difficult circumstances. Yet business continuity cannot be improvised. It requires specialised and trained staff, as well as the support of everyone within an organization.

Having specialised and trained business continuity staff with the ability and resources to develop, implement and maintain a business continuity plan, will help organizations identify the risks they face and key operational areas that need to be prioritised during a crisis.

"We need to learn from these experiences," said David Thorp, Executive Director at the BCI. "It is clear that the cyber threat is not going away any time soon, so organizations must do more to make sure they can respond to them effectively and prevent them from becoming a crisis."

The Business Continuity Institute

With phishing and social engineering maintaining their position as the top driver of cyber disruptions, there is a need for a stronger cyber resilience culture across organizations, and a focus on the human aspects of the threat.

This is one of the key findings of the Cyber Resilience Report, published today by the Business Continuity Institute, the world’s leading Institute for continuity and resilience, in collaboration with Sungard Availability Services ® (Sungard AS), a leading provider of information availability through managed IT, cloud and recovery services.

With the WannaCry ransomware attack still fresh in our minds, it is clear that the cyber threat is very real with this one attack affecting almost a quarter of a million computers across 150 countries. It is also clear that business continuity plays a key role in responding to an incident, and ensuring that the organization is able to manage through any disruption and so prevent it from becoming a crisis.

The Cyber Resilience Report found that nearly two-thirds of respondents (64%) to the global survey had experienced at least one cyber disruption during the previous 12 months, while almost 1 in 6 (15%) had experienced at least 10. Of those who had experienced a cyber disruption, over half (57%) revealed that phishing or social engineering had been one of the causes, demonstrating the need for users to be better educated about the threat and the role they can play in helping to prevent an incident occurring.

The study also found that:

  • A third of respondents (33%) suffered disruptions totalling more than €50,000, while more than 1 in 10 (13%) experienced losses in excess of €250,000.
  • 1 in 6 respondents (16%) reported a single incident resulting in losses of more than €50,000.
  • Nearly 1 in 5 respondents working for an SME (18%) reported cumulative losses of more than €50,000. These are significant losses considering 40% of SMEs involved in this study reported an annual turnover of less than €1 million.
  • Phishing and social engineering are the top cause of cyber disruption, with over half of those who experienced a disruption (57%) citing this as a cause.
  • 87% of respondents reported having business continuity arrangements in place to respond to cyber incidents, indicating that it is now widely accepted as playing a key role in helping to build cyber resilience.
  • 67% of respondents stated that their organization takes over one hour to respond to a cyber incident, while 16% stated that it can take over four hours.

The number of respondents reporting top management commitment to implementing the right solutions to the cyber threat increased to 60%, and this is likely due to a number of factors such as the intense media coverage of cyber security incidents, and the impending European Union General Data Protection Regulation, which is due to come into force in less than a year and will have an impact on any organization that holds data on EU citizens.

David Thorp, Executive Director at the BCI, commented: “Cooperation is key to building cyber and organizational resilience. Different disciplines such as business continuity, information security and risk management need to come together, share intelligence and start speaking the same language if they want to build a safer future for their organizations and communities.”

Keith Tilley, EVP and Vice Chair at Sungard Availability Services, said: “Brexit and the pending EU General Data Protection Regulation (GDPR) have thrown up even more questions about data laws and compliance, so data sovereignty is a focus. Companies need to demonstrate a holistic understanding of where their data is hosted, where it’s backed up, moved and recovered, as well as who can see it along the way. The fact that data laws are constantly subject to change, with region and country specific regulation, means a headache for large organizations. Establishing how to meet these regulations, as well as global needs will be vital, as will the ability to handle data access, residency, integrity and security.”

It’s hurricane season again, so hopefully you’ve prepared by updating your disaster recovery and business continuity plans to be ready for any disaster that might come your way.

While the character in our cartoon may have taken his boss’s request the wrong way, he had the right idea: Cover the essentials first. What’s the milk, eggs, and bread for your operation? Identify the data you need to stay up and running, and keep it safe and recoverable.

How solid and actionable will your IT disaster recovery plan be when a natural disaster hits? If you don’t have one or haven’t tested it in a while, it could mean lights out for your mission-critical data.

While we may not be able to exactly predict a hurricane’s course, you should chart your own course of action for when the unexpected happens. For a few more suggestions on how to batten down the hatches and ensure your business is disaster ready, check out this slideshow from CSO.

Hurricane preparedness cartoon

Feel free to share this cartoon, with a link back to this post and this attribution: “Cartoon licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License. Based on a work at blog.sungardas.com.”


US-CERT has received multiple reports of Petya ransomware infections occurring in networks in many countries around the world. Ransomware is a type of malicious software that infects a computer and restricts users' access to the infected machine until a ransom is paid to unlock it. Individuals and organizations are discouraged from paying the ransom, as this does not guarantee that access will be restored. Using unpatched and unsupported software may increase the risk of proliferation of cybersecurity threats, such as ransomware.

Petya ransomware encrypts the master boot records of infected Windows computers, making affected machines unusable. Open-source reports indicate that the ransomware exploits vulnerabilities in Server Message Block (SMB). US-CERT encourages users and administrators to review the US-CERT article on the Microsoft SMBv1 Vulnerability and the Microsoft Security Bulletin MS17-010 (link is external). For general advice on how to best protect against ransomware infections, review US-CERT Alert TA16-091A. Please report any ransomware incidents to the Internet Crime Complaint Center (IC3).
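Much of this guidance reduces to one practical task: find the machines on your network that still expose SMB and review them against MS17-010. Below is a minimal, illustrative inventory sketch (the `smb_exposed` and `survey` helper names are our own, not part of any advisory) that flags hosts accepting connections on TCP 445, the standard SMB port:

```python
import socket

SMB_PORT = 445  # standard TCP port for SMB file sharing

def smb_exposed(host, port=SMB_PORT, timeout=0.5):
    """Return True if the host accepts TCP connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def survey(hosts):
    """Map each host to whether it exposes SMB -- candidates for an MS17-010 patch review."""
    return {h: smb_exposed(h) for h in hosts}
```

A hit from this scan does not mean a host is vulnerable, only that it is reachable over SMB and worth checking for the patch; a dedicated vulnerability scanner would confirm the actual SMBv1 status.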

Whether your company is already operating in the European Union or has expansion plans there in the future, the upcoming GDPR rules will have a profound impact on how all organizations handle, manage and use consumer data.  Even if your website simply collects data on EU citizens, you must comply or face fines of up to €20 million or 4 percent of global annual turnover. Companies will face five common challenges on the path to compliance.

Though the GDPR implementation date is less than one year away, companies large and small are still struggling to comprehend what must be done to prepare. The General Data Protection Regulation (GDPR) seeks to improve privacy protection for consumers by changing the way businesses collect, use and transfer personal data. Companies purposely were given plenty of warning about the changing policies, but the vague language and complex structural changes mean a complete overhaul of anything remotely related to data in all companies – even for those outside the European Union and United Kingdom that do business with the U.K. and EU member states.

There are five main challenges companies need to address immediately in regard to data.

  1. Data Storage and Access
  2. Team Compliance and Training
  3. Data Subject Requests
  4. Data Notifications
  5. Adaptability and Scalability

GDPR does not only affect IT departments; this new regulation reaches far and wide, from human resources to finance and anyone in between who touches data. Companies that address these five challenges will be better prepared to meet the GDPR’s implementation deadline of May 25, 2018.



You've heard it a million times: there’s a robot coming for your job. I’ve written about it before. Several times.

New evidence suggests the reality is no joke.

The New York Times kicked the week off with a poignant story on the subject: “Indian Technology Workers Worry About a Job Threat: Technology.”

It punctuates a story on the raw numbers of tech workers who are losing their work to robots, chatbots, artificial intelligence (AI) and machine learning with some human stories. The article opens with something a good many American workers will relate to: a tale of a laid-off tech worker who laments, “I have an 11-year-old child. My wife is not working. How to pay the home loans?”



Instructor and student practicing CPR on mannequin.

We observed CPR and AED Awareness Week at the beginning of June. I recently had the opportunity to sit down with Stacy Thorne, a health scientist in the Office of Smoking and Health, who is also a certified first aid, CPR and AED instructor.

Stacy Thorne, PhD, MPH, MCHES

Stacy has a history of involvement in emergency response and preparedness activities at CDC. She is part of the building evacuation team, a group of employees who make sure that staff get out of the building in case of a fire, or shelter in place during a tornado. When she learned CDC offered CPR and AED training classes to employees, she couldn’t think of a better way to continue volunteering, while helping people prepare for emergencies.

Stacy became a CPR/AED instructor in 2012. She felt these were important skills to have and wanted to stay up-to-date with the latest guidelines. She said, “You have to get recertified every two years, so if I was going to have to take the class anyway why not teach and make sure other people have the skills to save a life.”

Practice makes perfect

Stacy teaches participants first aid, CPR, and AED skills and gives them an opportunity to practice their skills and make sure they are doing them correctly. The class covers first aid for a wide variety of emergency situations, including stroke, heart attack, diabetes and heat exhaustion. Participants learn how to:

  • Administer CPR, including the number of chest compressions and the number and timing of rescue breaths
  • Use an Automated External Defibrillator, more commonly referred to as an AED, which can restore a regular heart rhythm during sudden cardiac arrest.
  • Splint a broken bone, administer an epinephrine pen for allergic reactions, and bandage cuts and wounds

In order to receive their certification, all participants must complete a skills test where they demonstrate that they can complete these life-saving skills in a series of scenarios.

Lifesaving skills in action

Cardiopulmonary resuscitation, commonly known as CPR, can save a life when someone’s breathing or heartbeat has stopped. CPR can keep blood flowing to deliver oxygen to the brain and other vital organs until help arrives and a normal heart rhythm can be restored.

Stacy shared, “The most rewarding part of teaching is meeting the different people who come to take these classes and hearing the stories of how they have used their skills.” One of her students recalled how she used her CPR skills to save someone while she was out shopping. Her instincts kicked in and when she was able to get the person breathing again the people watching applauded.

Another student reflected, “While I hope I never am in a situation where I need to perform CPR, the notion that I am now equipped with these life-saving skills is reassuring and helps me feel prepared if I should find myself in that scenario.” Stories like these show how important it is for everyone to be trained in first aid, CPR, and how to use an AED. You can spend six hours in training, and walk out with a certification that can save someone’s life.

Always on alert

As the mother of a 6-year old daughter, Stacy is constantly on alert for situations where she might need to use her skills. The closest she has come to using her skills was when her daughter was eating goldfish crackers while lying down and started gagging; she was at the ready to perform the Heimlich maneuver. Her role as an instructor made Stacy feel confident that she could use her first aid, CPR, and AED skills in an emergency.


Posted by Suzie Heitfeld, Health Communications Specialist, Office of Public Health Preparedness and Response


Tuesday, 27 June 2017 14:36

CDC: Teaching skills that save lives

WannaCry has hit again. This recent attack involved a Honda plant in Japan, shutting down production. As Nick Bilogorskiy, senior director of Threat Operations with Cyphort, told me in an email comment:

Automakers are especially vulnerable to network worms like WannaCry because they often use computers with older versions of Windows and those are vulnerable to security flaws. Unlike other businesses such as banks, automakers do not upgrade their factory floor hardware or software aggressively and may get behind in installing patches.

He went on to explain how devastating these attacks can be to an industrial site. Once a machine is infected, you have to decrypt files, power down all the machines so nothing else gets infected, and then re-image or re-install all infected machines, as that is the only safe method to avoid any back doors dropped by WannaCry. Finally, you need to locate the necessary backups, restore data from them, reset all your systems to their pre-WannaCry state, and test that your applications are working as intended.



(TNS) - Fire Chief Steve Achilles acknowledged many city residents might not know Portsmouth's (N.H.) hazard mitigation plan exists.

"It's the kind of thing that they might never see, but people can take comfort knowing that we're thinking about these things," Achilles said this week.

City officials recently released the 2017 draft update of the plan, which was put together by several city officials, including Achilles, Deputy City Manager Nancy Colbert Puff and other fire, planning and Public Works staff.

"It's a document that the city has had for as long as I've been with the Fire Department and it gets updated every five years," Achilles said. "It's looking at how to reduce and mitigate hazards ahead of time to minimize the impact of natural disasters."

One key part of the plan is to identify what natural hazards Portsmouth could face, he said.



Cyber security software vendor Symantec today emerged as the only known western technology company to publicly refuse Russian government access to source code for its security products.

IBM, Cisco, Germany's SAP, Hewlett Packard Enterprise and McAfee are among the firms that allowed Russia to conduct source code reviews of products, including firewalls, anti-virus applications and other encrypted software, according to a new investigative report from Reuters.

The reviews – intended to protect Russia against cyber espionage – are conducted by the country’s Federal Service for Technical and Export Control (FSTEC), and the Federal Security Service (FSB), successor to the KGB and the agency blamed for attacking the 2016 U.S. Presidential election.



The enterprise has made great strides in curbing its appetite for energy over the past decade, but will this ultimately be a losing battle as demand for data continues to rise?

According to a recent report from the Lawrence Berkeley National Laboratory, the number of data centers coming online has seen a dramatic uptick in the past few years as organizations struggle to meet the always-on demands of an increasingly connected population. But the good news is that due to virtualization, low-power/high-density architectures and other developments, actual energy consumption has been flat. This is in stark contrast to the first decade of the new century, which saw energy demand jumping as high as 90 percent per year.

Still, leaders in the data center industry are concerned that the big gains in energy efficiency are over, and that the relentless demand for data, fueled in part by rapidly falling costs that are themselves the result of more efficient infrastructure, will put the industry on the fast track to dramatically higher consumption in relatively short order. At a meeting sponsored by DCD Energy this week, Donald Paul, of the University of Southern California’s Energy Institute, noted that once data centers approach a PUE of 1.0, there are no more gains to be had, since you cannot achieve more than 100 percent efficiency. And programs that encourage enterprises to reduce demands on the local grid also encourage the use of mostly diesel-powered backup systems.



Many business continuity professionals can attest to the tension that often occurs between the business and IT when it comes to recovery capabilities. For example, Company X recently implemented a business continuity program, including determining recovery time objectives (RTOs) for key business processes. Like all well-established business continuity programs, the business impact analysis (BIA) considered the loss of technology and helped the company develop recommended recovery time (and recovery point) objectives for technology resources. The business documented and presented these RTOs to management following the initial BIA, but never followed up with IT to ensure that the capabilities could be met.

Meanwhile, IT leveraged its own application/system list and related recovery information to prioritize applications for recovery and drive the implementation of a disaster recovery solution that was cost-effective and aligned with IT’s conclusions of business requirements for recovery (created from data outside the BIA). Both the business and IT feel confident in their work; yet, neither have communicated with the other. Given that the groups have not undergone a joint exercise (or actual disruption), neither group is aware of the underlying gap: Recovery priorities and strategies are misaligned between the business and IT.



The Business Continuity Institute

Building resilience by improving cyber security, published by the Business Continuity Institute during Business Continuity Awareness Week, revealed that users often choose weak passwords, leaving their IT networks vulnerable. That vulnerability has now been exploited at the UK Houses of Parliament. Over the weekend, Parliament experienced what was described as a sustained and determined cyber attack that forced remote access to be restricted for Members of both Houses, as well as their aides.

A senior spokesperson for Parliament commented: "We have discovered unauthorised attempts to access accounts of parliamentary networks users and are investigating this ongoing incident, working closely with the National Cyber Security Centre. Parliament has robust measures in place to protect all of our accounts and systems, and we are taking the necessary steps to protect and secure our network."

It was reported that the attack, which began last Friday, specifically tried to identify weak passwords and gain access to users' email accounts. Ultimately it succeeded against less than 1% of accounts, but that still amounts to about 90 people, and potentially exposes sensitive data.

International Trade Secretary Liam Fox said: "We have seen reports in the last few days of even cabinet ministers' passwords being for sale online. We know that our public services are attacked so it is not at all surprising that there should be an attempt to hack into parliamentary emails. And it's a warning to everybody, whether they are in Parliament or elsewhere, that they need to do everything possible to maintain their own cyber security."

While the restriction of remote access seems to have abruptly and effectively ended the attack, it left many Parliamentarians and their staff without access to their emails over the weekend, a time when many of them attempt to catch up with constituency work.

The report published by the BCI highlighted several ways in which users can take responsibility for helping to improve cyber security, and this included the use of strong passwords that cannot easily be hacked or guessed. By doing so it means that everyone can play their part in building a resilient organization.

Jargon crops up everywhere, and business continuity is no exception. RTO, RPO, BIA, and others are often sprinkled liberally into conversations, plans, and reports.

Sometimes expanding the abbreviation makes things clearer to the uninitiated: for example, the terms “recovery time objective” (RTO) for an IT system and “business impact analysis” (BIA) for BC planning give some hint of what lies behind them.

But what about “recovery point objective” (RPO), one of the most common terms used in defining a suitable disaster recovery/business continuity plan? Would we be better off if we banned the use of such jargon?

Banning probably wouldn’t work. For one thing, it would be the curtailing of free speech, and for another, like weeds, jargon would spring up again anyway. We need a better way of managing business continuity jargon, recognizing that it also has its uses.
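Part of managing the jargon is being able to explain it concretely. RPO is the maximum tolerable window of data loss, measured backwards from the moment of disruption, which means it constrains how frequently data must be backed up or replicated. A minimal sketch of that relationship, using hypothetical figures:

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A backup taken every `backup_interval_hours` can lose at most
    that much data in a disruption, so the schedule satisfies the RPO
    only if the interval does not exceed it."""
    return backup_interval_hours <= rpo_hours

# Hypothetical figures: nightly backups cannot meet a 4-hour RPO...
print(meets_rpo(24, 4))  # False
# ...while hourly snapshots can.
print(meets_rpo(1, 4))   # True
```

The function name and figures are illustrative only; in practice the RPO for each system comes out of the BIA, not the backup tooling.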



We all want to know something others don’t know. People have long sought “local knowledge,” “the inside scoop” or “a heads up” – the restaurant not in the guidebook, the real version of the story, or some advanced warning. What they really want is an advantage over common knowledge – and the unique information source that delivers it. They’re looking for alternative data – or “alt-data.”

From the information age where everyone took advantage of easy access to information, we are now entering an age where everyone seeks alternatives: new sources of information and innovative ways of deriving unique insights.  This is the “Age of Alt.”

We know that business leaders want to better leverage data and analytics in their decision-making. But more importantly, most decision-makers want to supplement their own data with external data: 81% tell us they want to expand their ability to source new external data. Demand for data is exploding.



Many technologies are billed as hot, exciting and revolutionary. But which ones are really deserving of that moniker? Which ones are destined to change — or are changing — the storage universe?

Enterprise Storage Forum asked the experts.



Friday, 23 June 2017 15:35

5 Hot Storage Technologies to Watch

What comes to mind when you hear the word “compliance”? Do you shiver, sigh, break out into hives, or all three? Believe it or not, your compliance colleagues are crucial to your social marketing success. This is especially true for marketers in regulated spaces such as financial services, healthcare, and pharmaceuticals. I can share from personal experience that my social marketing success at American Express was in part due to the relationships I fostered with compliance, legal, and even outside legal counsel — in fact, I’m still in touch with those former colleagues. Given the importance of breaking down the marketing compliance silo, I partnered with my colleague Nick Hayes on a new report. And though the intention of this report is to help marketers in regulated industries, Nick and I both agree that all marketers can benefit from it.



We all make mistakes, and CCOs are no exception. While CCOs are a creative and dedicated bunch, they are often susceptible to these five common mistakes. Probably unsurprisingly, the cure for these ills is more due diligence and more relationship building.

Chief Compliance Officers are fallible – I know that is not a controversial statement. To err is human, and CCOs are members of the human species.

With the enormous expectations placed on CCOs’ shoulders, they are bound to make some mistakes. I have seen CCOs who have run into difficulties, and occasionally they have contributed to the problem through their own behaviors.

I thought I would identify some of the common mistakes I have seen. It is hard to generalize, but I have observed some common themes.



The Business Continuity Institute

One in ten small business owners and employees are regularly putting the security of their data at risk by sharing confidential files on personal devices, or sending documents to personal rather than work emails. This demonstrates a significant lapse in data security among the UK’s five million plus small businesses.

The study by Reckon also found that a quarter of small business owners (25%) and their teams save documents onto their desktops rather than a central server. This means the data is less likely to be backed up, so if a computer failure occurs the data could be lost. These statistics were just as prevalent in larger SMEs (those with a turnover of £10 million or more): the findings showed that the same proportion (10%) of these larger businesses sent documents to personal devices, and a third saved documents on desktops rather than central servers.

"We believe the reasons behind these data breaches may include ease of access when working remotely, and keeping documents to hand rather than sorting through mismanaged folders," said Mark Woolley, Commercial Director at Reckon.

Sending and saving documents incorrectly and to personal devices breaches basic data security guidelines and could even put employers and employees at risk of breaching data protection laws. Such practices also place confidential information at risk of hacks or unauthorised use, and also mean that employers cannot provide complete audit trails of documents within their own business.

It’s concerning that so many SMEs in the UK are ignoring basic data protection rules. The findings are especially worrying where SME owners are involved, as they are placing their own organization’s sensitive information at risk. Incorrectly managing data and information in this way can pose financial, reputational and security issues to a business, something that no business owner wants to have to deal with.

Cyber security is as much of an issue for SMEs as it is for larger organizations according to the Business Continuity Institute's latest Horizon Scan Report which showed that organizations of all sizes share the same concerns. A global survey identified the top three concerns for both SMEs and large organizations as cyber attack, data breach and unplanned network outage.

“Bad habits can easily stick, particularly amongst teams within businesses where there aren’t clear policies around data security,” added Mark Woolley. “I’d urge new businesses to set guidelines around working with documents and emails at the outset in order to give themselves a head start when it comes to keeping information safe. Businesses should also consider that new legislation such as the General Data Protection Regulation will incorporate additional data security into law, making adhering to basic practices of vital importance."

The Business Continuity Institute

Cyber attackers are relying more than ever on exploiting people instead of software flaws to install malware, steal credentials/confidential information, and transfer funds. A study by Proofpoint found that more than 90% of malicious email messages featuring nefarious URLs led users to credential phishing pages, and almost all (99%) of email-based financial fraud attacks relied on human clicks rather than automated exploits to install malware.

The Human Factor Report found that business email compromise (BEC) attack message volume rose from 1% in 2015 to 42% by the end of 2016 relative to emails bearing banking Trojans. BEC attacks, which have cost organizations more than $5 billion worldwide, use malware-free messages to trick recipients into sending confidential information or funds to cyber criminals. BEC is the fastest growing category of email-based attacks.

“Accelerating a shift that began in 2015, cyber criminals are aggressively using attacks that depend on clicks by humans rather than vulnerable software exploits - tricking victims into carrying out the attack themselves,” said Kevin Epstein, vice president of Proofpoint’s Threat Operations Center. “It’s critical that organizations deploy advanced protection that stops attackers before they have a chance to reach potential victims. The earlier in the attack chain you can detect malicious content, the easier it is to block, contain, and resolve.”

Someone will always click, and fast. Nearly 90% of clicks on malicious URLs occur within the first 24 hours of delivery with 25% of those occurring in just ten minutes, and nearly 50% of clicks occur within an hour. The median time-to-click (the time between arrival and click) is shortest during business hours from 8am to 3pm EDT in the US and Canada, a pattern that generally holds for the UK and Europe as well.

Watch your inbox closely on Thursdays. Malicious email attachment message volume spikes more than 38% on Thursdays over the average weekday volume. Ransomware attackers in particular favor sending malicious messages Tuesday through Thursday. On the other hand, Wednesday is the peak day for banking Trojans. Point-of-sale (POS) campaigns are sent almost exclusively on Thursday and Friday, while keyloggers and backdoors favor Mondays.

Attackers understand email habits and send most email messages in the 4-5 hours after the start of the business day, peaking around lunchtime. Users in the US, Canada, and Australia tend to do most of their clicking during this time period, while French clicking peaks around 1pm. Swiss and German users don’t wait for lunch to click; their clicks peak in the first hours of the working day. UK workers pace their clicking evenly over the course of the day, with a clear drop in activity after 2pm.

The Business Continuity Institute

The United Nations Office for Disaster Risk Reduction has claimed that climate change is greatly increasing the likelihood of devastating wildfires, such as the one that burned its way across Portugal last weekend but is now reported to be under control.

More than 60 fires broke out in a densely forested area near the small town of Pedrógão Grande, 200km north-east of Lisbon, killing more than 60 people, in what Portuguese Prime Minister Antonio Costa described as the country’s “greatest human tragedy in living memory."

Dr Robert Glasser, the United Nations Special Representative of the Secretary-General for Disaster Risk Reduction, urged countries to integrate climate change risk in their fire prevention and response planning, commenting that "the fire highlights the urgency of global efforts to reduce greenhouse gases as quickly as possible."

Organizations in regions where wildfires are a possibility need to consider how they would respond to such an incident, or any incident that could result in the loss of facilities, danger to staff, or the evacuation of people from the region. Actions that need to be thought through include how to communicate with staff, or other stakeholders, during the event, primarily to ensure their safety, but also to liaise with them about alternative work arrangements. If facilities have been damaged, the organization will need to consider where staff can work in both the short term and the long term, bearing in mind that staff may not want to work in the short term, as the organization is unlikely to be their top priority.

Adverse weather, which can lead to the conditions that cause and spread wildfires, such as no rainfall, high temperatures and strong winds, featured fifth in the list of concerns that business continuity professionals have, as identified in the Business Continuity Institute's latest Horizon Scan Report. Climate change is not yet considered an issue, however, as only 23% of respondents to a global survey considered it necessary to evaluate climate change for its business continuity implications. Given this latest statement from UNISDR, perhaps now is the time to start giving it greater consideration.

A new study published in Nature Climate Change found that 30% of the world’s population is currently exposed to potentially deadly heat for 20 days per year or more.

Heavy rainfall due to Tropical Storm Cindy is expected to produce flash flooding across parts of southern Louisiana, Mississippi, Alabama, and the Florida Panhandle, according to the National Hurricane Center (NHC).

Total rain accumulations of 6 to 9 inches with isolated maximum amounts of 12 inches are expected in those areas, the NHC says.

On Tuesday, Alabama Governor Kay Ivey declared a statewide state of emergency in preparation for severe weather and warned residents to be prepared for potential flood conditions.

FEMA flood safety and preparation tips are here.



MSPs know that customers expect both scale and economics when it comes to the cloud.

For most, this means public cloud options like AWS, Google and Azure.

The subtitle for RightScale’s “2017 State of the Cloud Report” says it all: “Public cloud adoption grows as private cloud wanes.”

Public cloud services dominate news cycles for enterprise IT, and on the surface, the numbers seem to align with this narrative: organizations are increasingly leveraging public and hybrid cloud, while private cloud use feels like part of a forgotten era.



Attack sophistication is growing. Twenty years ago, social engineering had already made inroads and automated attacks were on the rise, with denial-of-service, browser executable attacks, and techniques for uncovering vulnerabilities in the binary code of applications.

Today, attacks are bigger, faster, and deeper, ranging from blended (cyber-physical) attacks and malicious counterfeit hardware, to entire supply chain compromises and adaptive attacks on critical infrastructure.

Yet in another sense attacks are on a downward trend, possibly giving enterprises and individuals a better chance of protection.



As virtualization becomes the norm, the risk of virtualization should be in the forefront of any business continuity manager’s mind.  We’ve compiled a list of areas of concerns and controls to reference throughout your virtualization transitions.

As organizations adopt and expand the use of cloud computing (e.g., software as a service – SaaS, infrastructure as a service – IaaS), most do not consider the acceptance of virtual infrastructure to be a major risk. Virtualization is the norm, and physical-based servers and storage are the exceptions. Nevertheless, you must consider the risks associated with your virtual environment as part of your overall risk assessment.



Wednesday, 21 June 2017 14:48

The Risk of Virtualization

In the highly competitive market for security tools, many vendors make the misleading claim of having the best of everything, and at this point in time "everything" often refers to data science, machine learning and AI. The result is an arms race of claims about tools that “automagically” address security problems, according to Forrester Research.

In its recent report "The Top Security Technology Trends to Watch, 2017," the analyst firm called out the battle of the data science algorithms, saying, "When virtually every security vendor makes the claim that they’re using artificial intelligence or machine learning for detection, security decision makers are left shaking their heads, trying to figure out what’s real and what’s not."

Still, decision makers need solutions. And it should be noted that, "data science has been part of cybersecurity for as long as there has been a category called cybersecurity. Machine learning and artificial intelligence do have roles to play in security, but they are not a panacea for the prevention of all cyberattacks," the report said.



The Business Continuity Institute

The average cost of a data breach is $3.62 million globally, a 10% decrease from the 2016 results, according to IBM's latest Cost of Data Breach Study, conducted in collaboration with the Ponemon Institute. This is the first time since the global study was created that there has been an overall decrease in the cost. On average, these data breaches cost companies $141 per lost or stolen record.

For the third year in a row, the study also found that having an Incident Response Team in place significantly reduced the cost of a data breach, saving more than $19 per lost or stolen record. The speed at which a breach can be identified and contained is in large part due to the use of an IRT and having a formal Incident Response Plan.

The Business Continuity Institute's latest Horizon Scan Report identified data breaches as the number two concern for business continuity and resilience professionals, with 81% of respondents to a global survey expressing concern about the prospect of a breach occurring. It cannot be emphasised enough therefore, just how important it is for organizations to have plans in place to respond to such an incident and help lessen its impact.

According to the IBM study, how quickly an organization can contain a data breach has a direct impact on financial consequences. The cost of a data breach was nearly $1 million lower on average for organizations that were able to contain a data breach in less than 30 days compared to those that took longer than 30 days. Speed of response will be increasingly critical as GDPR is implemented in May 2018, which will require organizations doing business in Europe to report data breaches within 72 hours or risk facing fines of up to 4% of their global annual turnover.
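The per-record figures quoted from the study can be combined into a rough back-of-the-envelope estimator. This is only a sketch using the averages reported above ($141 per lost or stolen record, more than $19 per record saved by an Incident Response Team, roughly $1 million saved by containment within 30 days); real breach costs vary widely by region and industry, and the function name is hypothetical:

```python
# Averages quoted from the 2017 IBM/Ponemon Cost of Data Breach Study.
PER_RECORD_COST = 141          # USD per lost or stolen record
IRT_SAVING_PER_RECORD = 19     # saving attributed to an Incident Response Team
FAST_CONTAINMENT_SAVING = 1_000_000  # approx. saving if contained < 30 days

def estimated_breach_cost(records: int,
                          has_irt: bool = False,
                          contained_within_30_days: bool = False) -> int:
    """Rough cost estimate: per-record cost minus the study's
    reported savings for an IRT and for fast containment."""
    cost = records * PER_RECORD_COST
    if has_irt:
        cost -= records * IRT_SAVING_PER_RECORD
    if contained_within_30_days:
        cost -= FAST_CONTAINMENT_SAVING
    return max(cost, 0)

# ~25,700 records at $141 each lands near the $3.62M global average.
print(estimated_breach_cost(25_700))  # 3623700
```

Note how the arithmetic makes the study's point concrete: at that scale, an IRT plus sub-30-day containment trims the estimate by roughly $1.5 million.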

"New regulatory requirements like GDPR in Europe pose a challenge and an opportunity for businesses seeking to better manage their response to data breaches," said Wendi Whitmore, Global Lead, IBM X-Force Incident Response & Intelligence Services (IRIS). "Quickly identifying what has happened, what the attacker has access to, and how to contain and remove their access is more important than ever. With that in mind, having a comprehensive incident response plan in place is critical, so when an organization experiences an incident, they can respond quickly and effectively."

While the global study revealed that the overall cost of a data breach decreased to $3.62 million, many regions still experienced an increased cost of a data breach. For example, the cost of a data breach in the US was $7.35 million, a 5% increase compared to last year. However, the US wasn't the only country to experience increased costs in 2017. Organizations in the Middle East, Japan, South Africa, and India all experienced increased costs in 2017 compared to the four-year average costs. Germany, France, Italy and the UK all experienced significant decreases compared to the four-year average costs. Australia, Canada and Brazil also experienced decreased costs compared to the four-year average cost of a data breach.

When compared to other regions, US organizations experienced the most expensive data breaches in the 2017 report. In the Middle East, organizations saw the second highest average cost of a data breach at $4.94 million – a more than 10% increase over the previous year. Canada was the third most expensive country for data breaches, costing organizations an average of $4.31 million. In Brazil data breaches were the least expensive overall, costing companies only $1.52 million.

"Data breaches and the implications associated continue to be an unfortunate reality for today's businesses," said Dr. Larry Ponemon. "Year-over-year we see the tremendous cost burden that organizations face following a data breach. Details from the report illustrate factors that impact the cost of a data breach, and as part of an organization's overall security strategy, they should consider these factors as they determine overall security strategy and ongoing investments in technology and services."

The Business Continuity Institute

Why do we have business continuity management programmes? Is it because we want to make sure our organizations have the capability to respond to a disruption? Probably yes! It is common sense that we would want to be prepared for any future crisis.

In some cases however, it is also because there is a legal obligation to do so. Many organizations are tightly regulated depending on what sector they are in or the country they are based, and therefore must have plans in place to deal with certain situations. Furthermore, the rules and regulations that govern us are often being revised, and sometimes it can be difficult to keep up with which ones are applicable.

So how do you know which rules apply to you? The Business Continuity Institute's BCM Legislation, Regulations, Standards and Good Practice publication would be a great place to start.

The BCI does its best to check the validity of the details within this document, but we are reliant on those working in the industry to provide updates. Please help inform our next edition by looking at the current version and advising us of any changes required for your region. If you do come across any inaccuracies, please contact the BCI so that the required updates can be made.

The Business Continuity Institute

It may not have been as disruptive or anywhere near as costly as the IT outage that affected BA just a few weeks ago, but many people are still suffering the consequences of an "unforeseen technical fault" that caused Tesco, the world's third most profitable retailer, to cancel a large number of its home deliveries in the UK.

We experienced an unforeseen technical fault which resulted in the forced cancellation of many orders due to a complete system failure. 2/4

— Tesco (@Tesco) June 20, 2017

Many people in the UK have become so reliant on supermarket deliveries that not having to visit the actual store has become a way of life. Having that comfort removed from us is not only a nuisance, it can completely disrupt our busy schedules. Those who had ordered sun-cream for their children to take to school, those who were getting some last-minute orders in before heading off to the Glastonbury Festival, and those who are simply unable to leave the house will now have to make alternative arrangements.

Incidents like this aren't rare occurrences. While they may not be commonplace, they occur often enough to warrant featuring in third place in the Business Continuity Institute's latest Horizon Scan Report. Organizations must therefore be prepared to deal with the possibility that one will occur.

For Tesco it could be quite costly, not just in terms of lost revenue, but also in terms of lost reputation, as many who had their orders cancelled soon took to Twitter to express their outrage. Some of those people will re-arrange their deliveries, but others will not. Many of them will now shop elsewhere, both on this occasion and in the future if they don't consider Tesco to be reliable. This is why it is important for organizations to make sure they have a plan in place to deal with the consequences of any form of disruptive event.

Imagine going into an outpatient facility for a simple procedure and coming out weeks later confined to a wheelchair. That’s what happened to Mallory Weggemann — who’s now a professional athlete, motivational speaker and writer at The Factory Agency — when she was just 18 years old. How has Mallory overcome adversity and found strength in her disability? Not only personally, but also as a Paralympic swimmer?

In this episode of Resilient, Mallory joins Deloitte Advisory’s Mike Kearney to share her story and discuss why it’s never too late to pick yourself up and make an impact that matters.

“We all have a disability. Everybody has that thing in life that they’re struggling with… We all have to figure out how to navigate through that. How can our disabilities enable us and not disable us?”



Smart cities are the ultimate emerging platform in ways both good and bad. Positives include healthier and happier citizens, more efficient and environmentally responsible communities, and better services to attract and support businesses.

On the flip side, however, smart cities are poorly defined and based on complex technologies,  such as the Internet of Things (IoT), that are just emerging. They demand significant investment and, if projects fail, the host community can be worse off than if the project had not been undertaken in the first place.

So the stakes are high. There was some news on the smart cities front last week. Hitachi Insight Group updated the smart city portfolio that the company introduced in May 2016. The updates, according to eWeek, are Hitachi Visualization Suite 5.0, Hitachi Smart Camera 200 and Hitachi Digital Evidence Management.



Tuesday, 20 June 2017 14:33

Nothing Small or Easy About Smart Cities

(TNS) - An ammonia leak at Cashmere’s former Tree Top plant led officials to issue a shelter-in-place order for the area for about an hour Sunday afternoon, and to briefly shut down Highway 2.

Chelan County, Wash., Emergency Management issued the shelter order about 2:50 p.m. for a half-mile radius around the fruit packaging plant, 210 Titchenal Road, and issued the all-clear just after 4 p.m. once the leak was capped.

Washington State Patrol Trooper John Bryant said the leak issued from a 13,000-pound ammonia tank at the Tree Top plant, a former juicing facility which has not operated since the Selah-based company shut it down in 2008.

“As soon as they saw what happened, they attempted to try and vent it, and advised the residents in the area pretty quickly,” Bryant said.



Tuesday, 20 June 2017 14:32

Ammonia Leak Sparks Emergency Response

Emergencies come in many forms: fires, hurricanes, earthquakes, tornadoes, floods, violent storms and even terrorism. In the event of extreme weather or a disaster, would you know what to do to protect your pet?

Many pet owners are unsure of what to do if they’re faced with such a situation. In recognition of National Pet Preparedness Month, here are five steps you can take to keep your pets safe during and after an emergency:

  1. Have a plan – include what you would do if you aren’t home or cannot get to your pet when disaster strikes. You never want to leave a pet behind in an emergency because they most likely cannot fend for themselves or may end up getting lost. Find a local pet daycare, a friend, or a pet sitter who can get to your pet if you cannot. Make plans ahead of time to evacuate to somewhere that is pet friendly, such as a pet-friendly hotel or a friend or family member’s home that is out of the evacuation area.
  2. Make a kit – stock up on food and water. It is crucial that your pet has enough water in an emergency. Never allow your pet to drink tap water immediately following a storm; there could be chemicals and bacteria in tap water so give them bottled water. Also, be sure to stock up on canned food. Don’t forget a can opener, or buy enough pop-top cans to last about a week.
  3. I.C.E. – No, not the frozen kind – it stands for “In Case of Emergency.” If your pet gets lost or runs away during an emergency, have information with you that will help find them, including recent photos and behavioral characteristics or traits. These can help return them safely to you.
  4. Make sure vaccinations are up to date – If your pet needs to stay at a shelter, you will need to have important documents about vaccinations or medications. Make sure their vaccinations are up to date so you don’t have any issues if you have to leave your pet in a safe place.
  5. Have a safe haven – Just like people, pets will become stressed when their safety is at risk. Whether you are waiting out a storm or evacuating to a different area, be sure to bring their favorite toys, always have a leash and collar on hand for their safety, and pack a comfortable bed or cage for proper security. If your pet is prone to anxiety, there are stress-relieving products like a dog anxiety vest or natural stress-relieving medications and sprays that can help comfort them in times of emergency. Ask your veterinarian what would be best for your pet.

Some other things to think about are:

  • Rescue Alert Sticker – Put a rescue alert sticker by your front door to let people know there are pets inside. If you are able to take your pets with you, cross out the sticker and put “evacuated” or another message to let rescue workers know that your pet is safely out of your home.
  • Let pets adjust – Don’t allow your pet to run back into your home or even your neighborhood once you and your family have returned. Your home could be disheveled and things might look different, and these changes can potentially disorient and stress your pet. Keep your pet on a leash and safely ease him/her back home. Make sure they are not eating or picking up anything that could potentially be dangerous, such as downed wires or water that might be contaminated.
  • Microchip your pet – Getting a microchip for your pet could be the difference between keeping them safe and them becoming a stray. Microchips allow veterinarians to scan lost animals to determine their identity so they can be returned home safely. Make sure your microchip is registered and up to date so if your pet gets lost, your information is accessible to anyone who finds your pet.

Resources for Pet Owners

Posted on by Crystal Bruce, Health Communications Specialist, Office of Public Health Preparedness and Response


JEFFERSON CITY, Mo. – Survivors who apply for assistance from the Federal Emergency Management Agency as a result of the federal declaration for flooding from April 28 to May 11, 2017, will receive a letter in the mail from FEMA. The letter will explain the status of their application and how to respond. It is important to read the letter carefully.

Many times applicants need to submit more information for FEMA to continue to process their application.

Examples of missing documentation may include an insurance settlement letter, proof of residence, proof of ownership of the damaged property, and proof that the damaged property was their primary residence at the time of the disaster.

Survivors who have questions about the letter may call the FEMA Helpline at 800-621-3362; go online to www.DisasterAssistance.gov; or visit a disaster recovery center.

To locate the nearest disaster recovery center, they may call the FEMA Helpline; use FEMA app for smart phones; or go online to www.fema.gov/DRC or https://recovery.mo.gov/.

Survivors may appeal FEMA’s decision. For example, if survivors feel the amount or type of assistance is incorrect, they may submit an appeal letter and any documents needed to support their claim, such as a contractor’s estimate for home repairs.

If survivors have insurance, FEMA cannot duplicate insurance payments. However, if they are underinsured they may receive further assistance for unmet needs after insurance claims have been settled.

How to Appeal a FEMA Decision

All appeals must be filed in writing to FEMA. Survivors should explain why they think the decision is incorrect. When submitting the letter, they should include:

  • Full name
  • Date and place of birth
  • Address of the damaged dwelling
  • FEMA registration number

In addition, the letter must either be notarized – if they choose this option, they should include a copy of a state-issued identification card – or include the following statement, “I hereby declare under penalty of perjury that the foregoing is true and correct.” The survivor must sign the letter. 

If someone other than the survivor or the co-applicant is writing the letter, there must be a signed statement affirming that the person may act on their behalf. The survivor should keep a copy of the appeal for their records.

To file an appeal, letters must be postmarked, received by fax, or personally submitted at a disaster recovery center within 60 days of the date on the determination letter.

By mail:

FEMA – Individuals & Households Program
National Processing Service Center
P.O. Box 10055
Hyattsville, MD 20782-7055

By fax:
Attention: FEMA – Individuals & Households Program

If survivors have any questions about submitting insurance documents, proving occupancy or ownership, or anything else about their letter, they may call the FEMA Helpline at 800-621-3362. Those who use 711 or Video Relay Services may call 800-621-3362. Those who use TTY may call 800-462-7585; MO Relay 800-735-2966; CapTel 877-242-2823; Speech to Speech 877-735-7877; VCO 800-735-0135. Operators will be available from 6 a.m. to 10 p.m. seven days a week until further notice.

FEMA and Missouri’s State Emergency Management Agency (SEMA) are committed to ensuring services and assistance are available for people with disabilities or others with access and functional needs. When they register, they should let FEMA staff know that they have a need or a reasonable accommodation request.

The federal disaster declaration covers eligible losses caused by flooding and severe storms between April 28 and May 11, 2017 in these counties: Bollinger, Butler, Carter, Douglas, Dunklin, Franklin, Gasconade, Howell, Jasper, Jefferson, Madison, Maries, McDonald, Newton, Oregon, Osage, Ozark, Pemiscot, Phelps, Pulaski, Reynolds, Ripley, Shannon, St. Louis, Stone, Taney, and Texas.

Monday, 19 June 2017 14:13

Understanding the FEMA Letter

CHICAGO – Summer is finally here, and while that means fun in the sun, it can also bring the threat of dangerous storms. In recognition of Lightning Safety Awareness Week, the Federal Emergency Management Agency’s Region 5 office wants you to learn how to reduce your lightning risk while outdoors.

“If you hear thunder, lightning is close enough to pose an immediate threat,” said FEMA Region V Acting Administrator Janet M. Odeshoo. “Seek shelter as quickly as possible. There is no place outside that is safe when a thunderstorm is in the area.”

Substantial buildings such as offices, schools, and homes offer good protection. Once inside, stay away from windows and doors and anything that conducts electricity, such as corded phones, wiring, plumbing, and anything connected to these. If you are caught outside with no safe shelter nearby, the following actions may reduce your risk:

  • Never shelter under an isolated tree, tower or utility pole. Lightning tends to strike the taller objects in an area.
  • Immediately get off elevated areas such as hills, mountain ridges or peaks.
  • Immediately get out and away from ponds, lakes and other bodies of water.
  • Stay away from objects that conduct electricity, including wires and fences.
  • Never lie flat on the ground.

The best way to protect yourself against lightning injury or death is to monitor the weather and postpone or cancel outdoor activities when thunderstorms are in the forecast. Lightning can strike from 10 miles away, so if you can hear thunder, you are in danger of being struck by lightning.

For additional information on lightning safety—wherever you may be this summer—visit www.ready.gov/thunderstorms-lightning. You can find more valuable storm safety tips by visiting www.lightningsafety.noaa.gov.  Consider also downloading the free FEMA app, available for your Android, Apple or Blackberry device, so you have the information at your fingertips to prepare for severe weather.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema.  The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

The Business Continuity Institute

We are used to assessing the immediate threats to our organizations, as those threats are happening right now. Organizations across the world are suffering from adverse weather, cyber attacks, supply chain failures and technical failures. These events may not affect our own organizations straight away, but with our increasing dependence on other organizations, they probably will in the near future.

But what about the long-term future? Organizational strategies are often looking beyond the short-term with five-year plans or even ten-year plans in place. So when we consider business continuity and resilience, should we be looking further ahead as well? Should we be assessing what the megatrends are that our organizations need to be preparing for now?

Megatrends are seen as the large social, economic, political, environmental or technological changes that occur over the long-term, changes that have the potential to profoundly shape the way we work and live our lives. Climate change, and everything it entails, is one such megatrend that could, or perhaps already is, having a major impact on our organizations.

The Business Continuity Institute is delighted to be collaborating with Siemens on a new study that will look at how organizations build resilience across the board, and what they think about climate change as one of these megatrends. You can help inform this study by taking a few minutes to complete the survey, and be in with a chance of winning a €100 Bol.com gift card.

This study is primarily looking at responses from the Benelux region, but input would be welcome from elsewhere in order to help make comparisons.

Having workloads distributed across multiple clouds and on-premises is the reality for most enterprise IT today. According to research by Enterprise Strategy Group, 75 percent of current public cloud infrastructure customers use multiple cloud service providers. A multi-cloud approach has a range of benefits, but it also presents significant challenges when it comes to security.

Security in a multi-cloud world looks a lot different than the days of securing virtual machines, HashiCorp co-founder and co-CTO Armon Dadgar said in an interview with ITPro.

“Our view of security is it needs a new approach from what we’re used to,” he said. “Traditionally, if we go back to the VM world, the approach was sort of what we call a castle and moat. You have your four walls of your data center, there’s a single ingress or egress point, and that’s where we’re going to stack all of our security middleware.”



Musings of a Cognitive Risk Manager

To drive change, you need buy-in, and to achieve buy-in, your people need to know the “why” behind the change. This is the premise behind cognitive risk governance, the “designer” of human-centered risk management. James Bone, author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, further explains the cogrisk framework.

In my last article, I explained the difference between traditional risk management and human-centered risk management and began building the case for why we must re-imagine risk management for the 21st century.  I purposely did not get into the details right away, because it is really important to understand why a thing must change before change can really happen.  In fact, change is almost impossible without understanding why.

Why put on sunscreen if you don’t know that skin cancer is caused by too much exposure to ultraviolet rays from the sun?  We know that drinking and driving is one of the leading causes of highway fatalities, but we still do it!  Knowing the risk of a thing doesn’t prevent us from taking the chance anyway.  This is why diets are so hard to maintain and habits are so hard to change.  We humans do irrational things for reasons we don’t fully understand.  That is precisely why we need cognitive risk governance.



Every so often it’s good to shake things up. Sometimes the simple act of asking questions about what we do in business continuity and why we do it can give us a fresh point of view and point out areas for improvement.

The venerable business impact analysis (BIA) is a case in point. Do you produce a BIA because it helps you optimise business continuity and its cost-effectiveness?

Or do you have one because the auditors ask for it and it’s part of the process? Adaptive BC challenges business continuity managers on this and several other important points.

It’s a fact that we lose focus at some point while we are working. The loss of focus can range from a few seconds to several years.



The Business Continuity Institute

Many organizations don’t devote enough attention to mission-critical applications when creating disaster recovery (DR) plans, and one of the biggest reasons is the 'resiliency perception gap', or the gap between executives’ perceptions of the effectiveness of their resiliency strategies and how successful these plans actually are at protecting against application outages or downtime. This gap can result in lost revenue and damaged brand reputations.

A new Forbes Insights Executive Brief, sponsored by IBM, showed that 80% of respondents fully expect that their disaster recovery plans can run their business in the aftermath of a disruption. Yet this confidence is questionable. Less than a quarter of these same executives say they include all critical applications in their DR strategies, which means 78% of enterprises face unplanned and unnecessary risks for these essential resources.

The report, Business resiliency: now’s the time to transform continuity strategies, also noted that gaps exist in management and governance activities, with 61% of executives saying that business continuity, disaster recovery and crisis management are siloed rather than administered as they should be - as an interrelated whole.

Many organizations don’t have the means, or the desire, to fully protect critical assets: nearly three-quarters (73%) of surveyed executives pointed to shortfalls in funding and other resources as impediments to covering all critical applications within DR programmes. In addition, another quarter of executives don’t even consider it essential to cover 100% of their critical applications.

Outdated runbooks are common: more than half of enterprises (58%) go almost a year, sometimes longer, between tests of their business continuity and DR plans, and only 28% of companies run assessments monthly. As a result, nearly half of the executives (47%) say that DR drills or actual events showed the runbook was out of sync. Almost half (46%) of the executives surveyed say testing disrupts their organizations, and the cost of running tests keeps another quarter from testing more frequently.

There is often an over-reliance on manual processes: DR strategies aren’t becoming automated as quickly as production processes, leaving nearly a third (31%) of enterprises struggling with manual DR resources. Even many of the more mature organizations have only pockets of automation.

“Clearly, many executives don’t realize the full extent of risks they’re running,” said Bruce Rogers, Chief Insights Officer at Forbes Media. “And tight budgets force many to make trade-offs.”

“Clients today demand IT recovery solutions that are designed for complex hybrid cloud environments to restore their confidence and meet their business needs,” said Chandra Sekhar Pulamarasetti, Co-Founder and CEO of Sanovi Technologies and VP Cloud Resiliency Orchestration Software and Services at IBM. “Cyber attacks and other threats require innovative business resiliency plans that are orchestrated to anticipate problems and reduce risk, cost, and downtime in the process.”

The Business Continuity Institute


Only a month after the WannaCry attack that affected about 250,000 networks across the world, it seems that ransomware is back in the headlines again with an attack on University College London, one of the largest universities in the UK with over ten thousand employees and nearly forty thousand students, and considered to be the seventh best university in the world. The attack affected its internal shared drives, and resulted in several NHS Trusts in the UK shutting down their own servers as a precaution.

UCL first reported the attack at the end of the day on Wednesday with the Information Services Division posting that "UCL is currently experiencing a widespread ransomware attack via email. Ransomware damages files on your computer and on shared drives where you save files. Please do not open any email attachments until we advise you otherwise. To reduce any damage to UCL systems we have stopped all access to all N: and S: drives. Apologies for the obvious inconvenience this will cause."

To help reassure those at the university who rely on access to the shared drives, ISD later added that "We take snapshot backups of all our shared drives and this should protect most data even if it has been encrypted by the malware. Once we are confident the infections have been contained, then we will restore the most recent back up of the file."

Having an effective back-up programme is one of the best ways to protect against the impact of a ransomware attack. If data is backed-up and the organization experiences a ransomware attack then they can isolate the ransomware, clean the network of it, and then restore the data from the back-up. It’s not necessarily an easy process, but it means they don’t lose all their data and they don’t pay a ransom.
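The snapshot-and-restore cycle described above can be sketched in a few lines. This is a minimal illustration of the principle, not UCL's actual tooling; the directory names and file contents are invented for the demo.

```python
import shutil
import time
from pathlib import Path

def take_snapshot(src: Path, dest_root: Path) -> Path:
    """Copy the share into a timestamped snapshot directory."""
    snap = dest_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(src, snap)
    return snap

def restore(snapshot: Path, src: Path) -> None:
    """Replace the (possibly encrypted) live copy with the snapshot."""
    shutil.rmtree(src)
    shutil.copytree(snapshot, src)

# Demo: snapshot the share, simulate ransomware, then restore.
share, backups = Path("demo_share"), Path("demo_backups")
for p in (share, backups):
    shutil.rmtree(p, ignore_errors=True)
share.mkdir()
backups.mkdir()
(share / "report.txt").write_text("important data")

snap = take_snapshot(share, backups)
(share / "report.txt").write_text("ENCRYPTED")  # ransomware strikes
restore(snap, share)                            # contain, then roll back
print((share / "report.txt").read_text())       # important data
```

In practice the snapshots would live on storage the malware cannot write to (offline media, immutable object storage, or filesystem-level snapshots), since ransomware that can reach the backups will encrypt those too.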

Unlike WannaCry, which was reported to have infected systems running out-of-date software, this attack was the result of users clicking on a malicious link. It was first reported to be the result of a phishing email, but later confirmed to be the result of users accessing a compromised website. Either way, it is exactly this type of activity that featured so prominently during Business Continuity Awareness Week, when a report published by the Business Continuity Institute demonstrated that each and every one of us can take simple steps to improve cyber security - one of which is to exercise more caution when clicking on links.

"It is encouraging to see that once again the potentially damaging impact of a cyber attack has been prevented by UCL having processes in place to deal with the threat," said David Thorp, Executive Director of the BCI. "This is business continuity in action, and while it may not prevent the disruption in its entirety, it ensures that it does not escalate further into a crisis."

Migrating from one computing platform to another can, and should, give pause. It is important to be prudent when deciding whether to migrate, and to which system. Total cost of acquisition (TCA) often becomes a tipping point, but the ongoing total cost of ownership (TCO) should also factor into this equation. This is especially true when both the TCA and TCO of IBM Power systems servers are considered. Low-end scale-out Power servers are competitively priced and offer a lower TCA than x86 servers running Linux. When integrated database, security, work management, support for multiple operating systems and high-availability resources are factored into the TCO for larger systems, it’s clear that Power systems servers offer the better value.

The Past, Present and Future of POWER

Some organizations consider Power systems servers outdated, as the platform was first introduced in 1979 as the System/38. However, the more than 150,000 companies that have embraced this technology consider that longevity a benefit rather than a detriment, with the platform representing decades of continuous enhancement culminating in the current POWER8 processor technology; POWER9 servers are scheduled to be announced in 2017 as part of the ongoing development roadmap.

The Benefits of IBM i on POWER

Interested in a full run-down of Power systems’ features and benefits?

The following whitepaper explores the ways in which IBM is staying ahead of the game with migration, performance, and cost benefits, all while backed by IBM experts ready to support the next generation of Power systems.

In this whitepaper, you will learn:

  • How IBM is staying relevant with the emerging IT workforce.

  • How IBM i reduces workload when upgrading systems.

  • How TCA, TCO and performance compare with x86.

  • The skills and management tools required to run Power systems and where to find them.

Download Why IBM i on POWER to learn more!

Thursday, 15 June 2017 19:13

Why IBM i on POWER

(TNS) — A Vigo County, Ind., official is seeking a renewed focus on getting the county recertified into a higher level of a national rating system in hopes of lowering costs for county residents required to have flood insurance.

“For the past few years, our office has failed in one main area,” said Jared Bayler, executive director of the Vigo County Area Planning Department. “Due to any number of reasons, we are no longer [getting a discount] in the Community Rating System program.” Bayler became executive director in February.

“What this means is we are failing to provide to Vigo County residents discounts on flood insurance programs that are made available to them,” Bayler said.

The National Flood Insurance Program’s Community Rating System is a voluntary incentive program that recognizes and encourages community floodplain management activities that exceed the program’s minimum requirements and encourages a comprehensive approach to floodplain management.



Like many others, we are trying to wrap our heads around the recent British Airways outage, an event so far-reaching and arguably avoidable that it’s difficult to believe such a thing can happen — yet it did. The event provides some good lessons for everyone. It’s a reminder that bad things can happen, even to a good organization. You need to be aware of the risks to your own technology and business and defend against them before they harm your business and your customers.

The direct costs to BA were likely substantial, to say nothing of the reputational damage and other indirect losses. It might take the airline a few quarters to recover fully. Public memory is short, and the beleaguered traveler is forgiving, but a three-day no-show is extreme. BA execs will get to the root-cause analysis soon, but the event (and historical failures at airlines in general) provides a bonanza of lessons for execs everywhere who want to better equip their organizations to handle such exigencies.

Here’s what they should do:



Fire safety officials around the world are reinforcing prevention and evacuation guidance to high-rise residents following the deadly 24-story apartment building fire at Grenfell Tower in West London.

So far, at least 17 people are confirmed dead in the fire, while close to 80 are hospitalized. UK prime minister Theresa May has ordered a public inquiry into the blaze. Insurance will play a role in the recovery.

Officials say that while catastrophic fires on the scale of Grenfell Tower are statistically rare, awareness is key.



If you are reading this, you either execute data migration or your projects rely on successful migration of data. Regardless of which, I would like to share some insight that I have gathered helping large projects through their data migration exercises. It is NOT a technical discussion, but rather a conversation focused primarily on the business and functional aspects of the data migration exercise.

Does this sound familiar – “The data migration effort had been running well until User Acceptance Testing (UAT), when the business could not validate the data”. This is one of the most common issues facing data migration and one that can grind any progress to a halt.

How about this – “The data migration team is the last one to be staffed, often well after the project is underway, and requirements and designs are being completed before the data migration team sets foot on the ground.”



The Business Continuity Institute

Most business continuity plans pay scant regard to how people might be feeling in the aftermath of a major disruptive incident and simply assume their willingness and ability to drop everything in order to activate those plans.

This assumption might be valid if the incident in question is limited in scope - such as a building, facilities, IT or supply chain issue - and doesn't result in death, injury or personal hardship. But if it's wider-reaching - for instance extreme weather, earthquake, flood, power failure, civil disturbance, terrorist incident or any of a whole host of potential events that affect the wider community - there's a major problem with it.

The fact is that people are likely to be thinking of themselves, their families and their homes, rather than the organization they work for. In which case, the business continuity plan is likely to rank somewhere near the bottom of the list of things on their minds. And their willingness to drop everything and come to the aid of the organization is, perfectly reasonably, likely to be somewhere between low and zero.

Most people have lives, and responsibilities, outside of work. But it's much easier to simply ignore this important fact when creating our business continuity plans than to worry too much about it. So that's precisely what many planners do. The trouble with this approach, however, is that whilst our plans might look okay on paper, they could well be doomed to failure from the outset if we actually have to put them into operation.

Andy Osborne is the Consultancy Director at Acumen, and author of Practical Business Continuity Management. You can follow him on Twitter and his blog or link up with him on LinkedIn.

Saturday, 17 June 2017 17:37

BCI: Self, self, self...

The Business Continuity Institute

Employees who become distracted at work are more likely to be the cause of human error and a potential security risk, according to a snapshot poll conducted by Centrify at Infosec Europe.

While more than a third (35%) of survey respondents cite distraction and boredom as the main cause of human error, other causes include heavy workloads (19%), excessive policies and compliance regulations (5%), social media (5%) and password sharing (4%). Poor management is also highlighted by 11% of security professionals, while 8% believe human error is caused by not recognising our data security responsibilities at work.

According to the survey, which examines how human error might lead to data security risks within organisations, over half (57%) believe businesses will eventually trust technology enough to replace employees as a way of avoiding human error in the workplace.

Despite the potential risks of human error at work, however, nearly three-quarters (74%) of respondents feel that it is the responsibility of the employee, rather than technology, to ensure that their company avoids a potential data breach.

This ties in closely with the theme for the recently ended Business Continuity Awareness Week, organized by the Business Continuity Institute, which highlighted that users can do more to play their part in cyber security. A report published during the week revealed six simple ways in which they can do this and this included better password control and more caution when clicking on links.

“It’s interesting that the majority of security professionals we surveyed are confident that businesses will trust technology enough to replace people so that fewer mistakes are made at work, yet on the other hand firmly put the responsibility for data security in the hands of employees rather than technology,” comments Andy Heather, VP and Managing Director, Centrify EMEA.

“It seems that we as employees are doubly responsible – responsible for making mistakes and responsible for avoiding a potential data breach. It shows just how aware we need to be at work about what we do and how we behave when it comes to our work practices in general and our security practices in particular.”

The Business Continuity Institute

Airmic has launched a major new study to determine what resilience looks like in a digitally-transformed business world. According to the association, the digital revolution is fundamentally altering the ways in which organizations develop and execute strategy, which will impact business models and their approach to risk and resilience.

Julia Graham, Airmic's technical director and deputy CEO who is leading the project, said: "The digital revolution is moving at a lightning speed and will not only alter the risks our members have to manage, but also the way they have to manage them. At the moment, we - Airmic and the business world as a whole - do not fully understand this process so this project is about taking a leading role in the debate."

According to Airmic, several leading studies have highlighted the speed and disruptive nature of the digital revolution. KPMG's Now or never: 2016 CEO outlook, for example, warns: "The speed of change will be, quite literally, inhuman, as the advancement of data and analytics and cognitive and machine learning drive forward change more quickly than humans alone could ever achieve."

Airmic's study, Roads to Revolution, will be conducted jointly with Cass Business School and published in 2018. It will build upon Airmic's ground-breaking research, Roads to Ruin (2011), which analysed the common underlying causes of corporate failures, and Roads to Resilience (2014), which analysed the common underlying features of resilient businesses.

"Our previous research established what good and bad looked like in terms of organizational resilience, but little is understood about how this will be affected by the current wave of technological advancement," Graham said. "Through case-studies, focus groups and academic analysis, we will shed light on how organizations are transforming their business models and cultures to ensure resilience and growth in the digital age."

The Business Continuity Institute

Organizations are not doing enough to ensure their travel risk strategies are fit for the 21st century realities of business travel and fulfil their legal duty of care, according to a new report published by Airmic.

The report, Travel risk management, notes that business travel has grown by 25% over the last decade, with businesses sending employees and others they are responsible for to a wider range of territories, including high or extreme risk regions. They must be able to respond to the many possible factors that could convert even a low-risk destination into a high-risk one in a matter of hours, e.g. health, safety, security, political or social change, and natural disasters.

Businesses have a legal duty of care to protect their employees – which may include contractors and family members – and yet only 16% of Airmic members surveyed have high confidence in their travel risk management framework. To respond to this increased reliance on travel, organizations need flexible and evolving travel risk management strategies that go beyond purchasing travel insurance.

These strategies should respond to the different risks present in different territories and the requirements of the different individuals travelling. Businesses also need reliable sources of relevant intelligence, and flexible, pre-rehearsed plans in place to ensure a quick and proportionate response to any crisis impacting their people.

“Sadly every week we are currently reminded why having an effective travel risk management framework in place is imperative. As the tragic events in Westminster, Manchester and more recently on London Bridge and Borough Market demonstrate, any destination can become high risk at an intense speed,” Julia Graham, Airmic’s deputy CEO and technical director, commented.

She added: “I urge all risk professionals to review, update and rehearse how they would respond should such an incident impact their organization. Knowing where your people are and how you can communicate with each other in the event of a crisis is especially important.”

The Business Continuity Institute

Overnight a fire raged through a 24-storey tower block in West London, completely destroying it and claiming several lives. While this may have been a residential building, the speed with which the fire took hold is a clear warning that organizations must have plans in place to ensure the safety of their staff, as well as other stakeholders, should such an incident occur at work.

As land becomes more expensive, the number of high-rise buildings being constructed is increasing all the time, with developers constantly striving to build taller and fit more office space on the same footprint of land. Many offices are also being redesigned to be open plan so an even greater number of people can be squeezed into the same square footage. This can come at a cost, however. The taller a building gets, and the more people who work within it, the greater the challenge of finding suitable escape routes for everyone should an emergency arise.

Had this building been an office block, had the fire swept through it in the middle of the day, how quickly could it have been evacuated? How quickly could your organization have made sure that all employees, and everyone else in the building, got out safely?

Some of the residents reported that they were only warned of the fire by other residents, not by the fire alarm system. If the fire alarms didn’t work, then it is highly likely that the fire suppression system didn’t work either, which is perhaps why the fire spread so rapidly. How frequently do you check the alarm system within your building? Can you say with a high degree of certainty that, if a fire occurred, everyone would be sufficiently warned?

It was also reported that some residents who were trapped in the building had resorted to flashing their mobile phone torches to gain attention and seek help. In desperation, this was all they could do. Organizations must have an effective emergency communications system in place so urgent two-way messages can be sent out to confirm that staff are safe, or, if they are not, then they can be located and made safe as soon as possible.

The safety of staff is paramount to business continuity and making our organizations more resilient. Office space and IT can easily be replicated elsewhere - staff cannot. Not to mention, of course, the moral duty to keep them safe. We must ensure that our buildings are safe environments to work in and that, should the worst happen, staff can safely exit the building. Furthermore, we must make sure that whatever plans, processes and procedures we have in place to safeguard our staff are exercised on a regular basis so any flaws can be found and resolved.

David Thorp
Executive Director of the Business Continuity Institute

Wednesday, 14 June 2017 14:48

BCI: Ensuring the safety of our staff

Natural floods are becoming increasingly common. Our guide will help you be ready, react and recover should a flood hit your area.

When a flood strikes, it can wreak havoc in two ways.

First is the immediate damage from the water itself; the second is the long-lasting aftermath.

During the UK’s wettest month on record (December 2015), the Environment Secretary at the time - Liz Truss - estimated that around 16,000 homes were flooded, costing millions in recovery and repairs.

However, she also claimed that more than 20,000 homes were protected due to flood defences that had been put into place.



Wednesday, 14 June 2017 14:46

How to minimise flood damage in your home

Despite ample capital and benign claim cost trends, insurers have held the line on trading profitability for volume, while still responding as needed to emerging trends, according to Willis Towers Watson.

Its most recent Commercial Lines Insurance Pricing Survey (CLIPS) shows that commercial insurance prices in the U.S. were nearly flat in the first quarter of 2017.

Price changes reported by carriers averaged less than 1 percent for the sixth consecutive quarter.



Wednesday, 14 June 2017 14:45

Eye on commercial insurance prices

Disaster situations are uncomfortable to think about. Even the most pessimistic among us have been guilty of avoiding the discussion by saying things like “that won’t happen to us.” But that’s the exact mindset you want to avoid when it comes to protecting your business.  

Everything we do in Business Continuity Planning involves risk impact avoidance, mitigation or acceptance. We cannot prevent outage events from occurring; we can only prepare for how to respond or minimize the impact of risks and outage events. During our speaking engagements, or when talking with clients about potential risks and conditions to plan for, we often hear, “that won’t happen” or “that can’t happen.” While I am not superstitious, there are times I wonder if Mr. Murphy is real. Today I will share some of the events that can’t happen – that did. Have you ever thought:

We are too small for any cyber criminals to target us.



Mitigating Risk to Enhance Data Security

In this article, Jason Allaway, RES Area VP for the U.K. and Ireland, reveals what the true cost of a ransomware attack like WannaCry will be in the GDPR era. As many organizations struggle to prepare for the upcoming regulation, Jason shares the three pillars of risk that must be integrated into organizations’ GDPR strategies to protect and secure sensitive data without hindering productivity.

Over the last few weeks, there have been numerous news stories around the WannaCry ransomware attack and the disruption that it has produced. WannaCry has caused major issues and compromised personal data around the world in a very short period of time.  It was reported that more than 200,000 computers were hijacked in more than 150 countries, with victims including hospitals, banks, telecommunications companies and warehouses.

Today, data is worth a lot of money, and cybercriminals know it. This is one of the key reasons why the EU has established requirements around doing more to protect data from breaches with the impending GDPR legislation. In fact, the GDPR compliance deadline of May 25, 2018 is less than one year away.



I had an epiphany today about a major reason open source is disrupting enterprise software. This is perhaps one of those things you have heard so often that you've gone numb to it. All the big giants are still alive and kicking, so is this really happening? The answer is yes, but the mechanics are not what you think. It is not simply a cost play. The acquisition - one of the main weapons that big software vendors had to fight disruptors - is losing effectiveness. And that changes everything. Allow me to explain:

In the past, big vendors bought the smaller potential disruptors and got the code and customers. Cash disrupted the disruptor; investors got paid, and customers got the new technology as part of the big vendor's larger suite. Everyone was happy.



The exciting landscape of modern life has been built with the aid of powerful computers. They have done dazzling things, from making the trains run on time to helping to build skyscrapers. Now, imagine a discontinuity in computing in which these capabilities are suddenly expanded and enhanced by orders of magnitude.

You won’t have to imagine too much longer. It is in the process of happening. The fascinating thing is that this change is based on quantum science, which is completely counter-intuitive and not fully understood, even by those who are harnessing it.

Today’s computers are binary, meaning that they are based on bits that represent either a 1 or a 0. As fast as they go, this is a basic, physical gating factor that limits how much work they can do in a given amount of time. The next wave of computers uses quantum bits – called qubits – that can simultaneously represent a 1 and a 0. This property, at the root of the mysteries that even scientists refer to as “quantum weirdness”, allows the computers to do computations in parallel instead of sequentially. Not surprisingly, this greatly expands the ability of this class of computers.
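
As a rough illustration only (a toy state-vector sketch in Python, not tied to any real quantum hardware or to the article's sources), the key difference can be shown in a few lines: a qubit's state is a pair of complex amplitudes rather than a single bit, and the state space doubles with every qubit added.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a one-qubit state (a, b),
    turning a definite 0 or 1 into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

# Start in the classical state |0>, represented as amplitudes (1, 0).
zero = (1.0, 0.0)
plus = hadamard(zero)            # now "simultaneously" 0 and 1

prob_0 = abs(plus[0]) ** 2       # probability of measuring 0
prob_1 = abs(plus[1]) ** 2       # probability of measuring 1
print(round(prob_0, 3), round(prob_1, 3))   # 0.5 0.5

# The state vector doubles with each added qubit -- the source of the
# exponential growth in parallel capacity the article describes.
for n in (1, 2, 10, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

Simulating 50 qubits classically would already require tracking 2^50 (about a quadrillion) amplitudes, which is why even modest quantum machines are hard to emulate on binary hardware.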



Communication: the backbone, or cornerstone if you will, of any successful enterprise.  Without it you can have an organization moving in multiple directions, causing confusion and bringing all the archways down around you, just when you need to be moving forward as a cohesive unit – especially during times of crisis.  What makes it so key to everything?  Why that particular aspect of the Business Continuity Management (BCM) framework?  It’s because communication is the glue that holds more together than a disaster response – though it is, of course, very key to a disaster response. It holds us all together, and has done since the first two Homo sapiens caught each other’s eye.

Communication is used on a daily basis; from infancy to adulthood through to our autumn years.  A toddler crying communicates its hunger or discomfort, and as we get older we communicate the same thing using words or, if we have lost that ability, with beautifully choreographed hand gestures.  And we communicate not just in the good times but in the bad times as well.  It can comfort us when we are feeling down or enrage us when our dander is up.



Tuesday, 13 June 2017 16:14

BCM & DR: Communication is the Key

The Business Continuity Institute

When looking at the potential threats that could disrupt our organizations, it is often physical or virtual events that we first think of – adverse weather, supply chain failure, cyber attack, pandemic. But while we often consider an event or occurrence to be disruptive, do we also consider a lack of activity to be disruptive? Is there something we’re not doing that could lead to a disruption within the organization? In today's digital world, failure to keep up with technology could be just as damaging as any tropical storm.

A new study by Capgemini and Brian Solis has found that 62% of respondents see corporate culture as one of the biggest hurdles in the journey to becoming a digital organization. As a result, companies risk falling behind competition in today’s digital environment. Furthermore, the data shows that this challenge has worsened by 7 percentage points since 2011, when Capgemini first began its research in this area.

The Digital Culture Challenge: Closing the Employee-Leadership Gap uncovers a significant perception gap between the senior leadership and employees on the existence of a digital culture within organizations. While 40% of senior-level executives believe their firms have a digital culture, only 27% of the employees surveyed agreed with this statement.

Cyril Garcia, Head of Digital Services and member of the Group Executive Committee at Capgemini, said: “Digital technologies can bring significant new value, but organizations will only unlock that potential if they have the right sustainable digital culture ingrained and in place. Companies need to engage, empower and inspire all employees to enable the culture change together; working on this disconnect between leadership and employees is a key factor for growth. Those businesses that make digital culture a core strategic pillar will improve their relationships with customers, attract the best talent and set themselves up for success in today’s digital world.”

The findings reveal a divide between senior-level executives and employees on collaboration practices with 85% of top executives believing that their organization promotes collaboration internally, while only 41% of employees agreed with this premise.

Corporate culture is equally as important in the business continuity profession, so much so that it features as one of the six professional practices referred to in the Business Continuity Institute's Good Practice Guidelines. Integrating business continuity into the day-to-day business activities is vital to a successful programme, but this can only be achieved with top management support.

The report highlights that companies are failing to engage employees in the culture change journey. Getting employees involved is critical for shaping an effective digital culture and accelerating the cultural transformation of the organization. Leadership and the middle management are critical to translating the broader digital vision into tangible business outcomes and rewarding positive digital behaviors.

“To compete for the future, companies must invest in a digital culture that reaches everyone in the organization. Our research shows that culture is either the number one inhibitor or catalyst to digital transformation and innovation. However, many executives believe their culture is already digital, but when you ask employees, they will disagree. This gap signifies the lack of a digital vision, strategy and tactical execution plan from the top”, said Brian Solis. “Cultivating a digital culture is a way of business that understands how technology is changing behaviors, work and market dynamics. It helps all stakeholders grow to compete more effectively in an ever-shifting business climate."

Every two years, the World Health Organization (WHO) releases a medication list that it believes should be available, if needed, to all the people of the Earth. The latest iteration of the essential medicines list has just been released. It’s a compendium like the ones health insurers maintain to help them determine which medicines should be covered by their policies. Think of it as The World’s Formulary.

That may sound dull or at least rather wonky. But there are real-world implications when a drug makes — or is not approved for — this list. The move to include HIV drugs in 2002 arguably helped to make lifesaving antiretrovirals available to AIDS patients in developing countries. More recently, the addition of hepatitis C drugs to the list appears to have put them on a similar trajectory.

The list is meant to help countries figure out how to prioritize spending on medications. It’s a model that many use to craft their own drug formularies — while individual countries may make tweaks here and there, they don’t each need to set about inventing this wheel.



(TNS) - Ammo casings littered the entrance to the dimly lit gym basement of Fountain Middle School. Cole Davison, a junior at Fountain-Fort Carson, Colo., High School, sat on the stairs yelling at a man with a gun. The man fired a shot at Davison, hitting him in the foot. Police managed to drag the student to safety, then went back into the school. Two gunmen were inside and students were missing.

It's the nightmare no one wants to live, but it's a reality law enforcement officers, school officials and students have to be prepared to handle.

During a three-day training exercise this week, the Fountain Police Department and eight other agencies practiced responding to an active shooter and other emergencies. On Friday, the final day, police and other agencies dealt with a worst-case scenario: two armed intruders with hostages in the basement of the school's gym.



In Japan, disaster learning centers that allow visitors to experience simulated earthquakes, typhoons and fires are gaining five-star reviews on travel sites like TripAdvisor and providing valuable lessons in preparedness.

The Japan Times reports that earthquake simulators have become major tourist draws at more than 60 disaster education centers nationwide and are attracting growing numbers of foreign visitors.

Some attribute the increased interest in disaster prevention education in Japan to the 2011 Tohoku earthquake and tsunami. Others note that tourists today are more interested in life experiences than shopping.



While hackers and cyberterrorists often make headlines, there is a far more common cause for concern when it comes to safeguarding companies’ confidential data – their employees. Research from CEB, now Gartner, indicates nearly 60 percent of privacy failures result from an organization’s own employees and, worse, over half of employee-driven privacy failures result from intentional behavior.

To reduce the chances of employees creating privacy failures, most organizations first create training and communications focused on the importance of data privacy. But these efforts can prove ineffective, especially if they are created with a one-size-fits-all approach. While privacy awareness is certainly important for all employees, messages on how to reduce privacy risks designed for entry-level employees may not be as applicable to managers, or for that matter, senior executives.

To ensure a privacy program drives the right behaviors among all employees, leaders must tailor risk management strategies to the unique characteristics of different employee groups. But what risks do these groups pose?



Monday, 12 June 2017 16:49

Risk Profiles for Key Employee Groups

When WannaCry ransomware hit last month, it highlighted a very serious security problem, one that we just don’t talk about enough. That’s the use of outdated and unsupported operating systems and software.

Even before the massive ransomware attack, I knew how much of a hidden problem this was, mostly through anecdotal evidence. I’ve had informal conversations with people employed in varied industries, including those doing highly sensitive research, who have said they continued to use Windows XP because IT didn’t have the time or budget to upgrade to a newer OS, or because they just liked XP better than anything else and switched back. We’ve heard stories that point-of-sale systems and IoT devices still run on XP because it would be too costly to switch.

BitSight has confirmed my anecdotal evidence. In a new report, “A Growing Risk Ignored: Critical Updates,” the company analyzed more than 35,000 companies from industries across the globe and found that a surprising number of companies continue to run outdated and unsupported operating systems, as well as internet browsers.



The Business Continuity Institute

Data is of incredible value to our organizations. The more we have, the more we can discover about our customers and our market. The more we know about these, the more we can fine tune our products and services to meet their precise needs. Of course that data doesn't just have a value to our own organizations, it also has a value to others, and that is why we need to make sure it is protected.

Over the last few years we've seen some big organizations receive fines of tens of millions of dollars as a result of a data breach, and we've also seen them suffer severe reputational losses. When the General Data Protection Regulation comes into force next May, the potential for large fines will increase further for any organization that holds data on EU citizens.

It is important that organizations have processes in place to protect their data, and processes in place to be able to respond in the event of a breach. The BCI has now begun a new study, conducted in collaboration with Mimecast, that seeks to discover the attitudes, behaviours and business continuity arrangements in place related to information security.

Please do support this study by completing the survey; it should only take about ten minutes, and each respondent will be in with a chance of winning a £100 Amazon gift card.

The Business Continuity Institute

Like the terrorist attack in Manchester, the response by individuals to the London Bridge attack last Saturday made me proud to be British. The off-duty policeman rugby tackling one of the terrorists, the British Transport Police officer armed with just a baton fighting one of the knifemen and the people who threw bottles, chairs and tables to protect customers in one of the pubs nearby are all heroes who rose to the occasion.

One of the people interviewed about the incident was a gentleman from the Royal United Services Institute, who made a comment about how during the incident he saw many people enact the government’s advice of ‘Run, Hide, Tell’. This got me thinking that the emergency services response to both recent attacks and the public’s use of ‘Run, Hide, Tell’ are very good examples of how exercising plans actually makes a difference.

I believe that a few weeks before the Manchester attack, the police had actually practised a very similar exercise to the incident they had to respond to. Last year, there was an exercise at The Trafford Centre, which involved 800 volunteers playing members of the public, in order to test the emergency response to a major terrorist incident. These extensive ‘live’ exercises require a lot of planning and are costly to run, but their worth was proved by the response to the Manchester bombing. In the media coverage of the incident, there was not one bit of criticism directed at the emergency services. This is in very stark contrast to the responses, although some time ago, to Hillsborough and the Bradford fire.

In the same way, the ability of the police to respond to the attack in London and kill the terrorists within eight minutes is again testament to the planning and professionalism of the police. I was training a bank’s country crisis management team this week and I used both examples as reasons why exercising plans is so important.

I think the public using ‘Run, Hide, Tell’ is important in three aspects. Firstly, it worked and helped to reduce the number of casualties. Secondly, I think as business continuity people we should be teaching this to our staff. A couple of years ago I thought that it would be alarmist and unlikely to be required, but, due to the threat level and the number of attacks, I now think it is a useful drill to teach.

Thirdly, I think it illustrates the importance of embedding business continuity and shows that everyone needs to know what to do in an emergency. If employees hear about an incident which has affected your head office, instead of going home and waiting for instructions, they know what to do themselves. If they are a member of the crisis management team, they know to immediately go to the second team location or the work area recovery location if they have recovery roles. This will speed up the response, as lots of time and effort will not be spent telling staff what to do and where to go.

We as business continuity people don’t need to be convinced to exercise our plans, but often those who have roles in the team are reluctant!

P.S. Have a look at the citizenAID App, which has been produced with lots of useful information about how to respond to a terrorist attack.

Charlie Maclean-Bristol is a Fellow of the Business Continuity Institute, Director at PlanB Consulting and Director of Training at Business Continuity Training.

Governments often make legal requirements about things that could damage people’s health, whether in a physical, financial, or possibly other sense.

Motor vehicles must be insured. Underage drinking is forbidden. Enterprises are required to meet health and safety standards for employees and visitors.

Financial institutions must be certified and separate internal finances from customer accounts. With today’s dependency on IT and data, a case could be made for enforcing minimum levels of disaster recovery planning and management.

After all, a systems failure could force a company to shut down, possibly causing severe hardship.



Change Management is a hot topic lately on my social media channels. Like my friend Jon Hall, I also am a long-time veteran of the classic Change Advisory Board (CAB) process. It almost seems medieval: a weekly or bi-weekly meeting of all-powerful IT leaders and senior engineers, holding court like royalty of old, hearing the supplications of the assembled peasants seeking various favors. I’ve heard the terms “security theater” and “governance theater” applied to unthinking and ritualistic practices in the GRC (governance, risk, and compliance) space. The CAB spectacle, at its worst, is just another form of IT theater, and it’s time to ring that curtain down.
As a process symbolizing traditional IT service management and the ITIL framework, it’s under increasing pressure to modernize in response to Agile and DevOps trends. However, change management emerged for a reason and I think it’s prudent to look at what, at its best, the practice actually does and why so many companies have used it for so long. 
This was the topic of my most recent research, “Change Management: Let’s Get Back to Basics.” In that report, I cover the fundamental reasons for the Change process. It has legitimate objectives — coordination, risk reduction, audit trail — that do not go away because of Agile or DevOps. The question is rather, how does the modern, customer-led, digital organization achieve them? The classic “issue a request and appear before a bi-weekly CAB” is one way to achieve the desired outcomes — and likely not the most effective means, as I discuss.
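
To make that concrete, here is a minimal, hypothetical sketch (the field names, scoring rules, and thresholds are all invented for illustration, not taken from the report) of how a delivery pipeline might preserve the Change process's legitimate objectives - coordination, risk reduction, audit trail - while reserving human CAB review for genuinely risky changes:

```python
from datetime import datetime, timezone

# Every routing decision is recorded, preserving the audit trail that
# a traditional CAB meeting's minutes used to provide.
AUDIT_LOG = []

def assess_change(change):
    """Route a change request by simple, pre-agreed risk rules.
    All field names and weights here are illustrative assumptions."""
    score = 0
    if change.get("touches_production_data"):
        score += 2
    if not change.get("has_rollback_plan"):
        score += 2
    if not change.get("tests_passed"):
        score += 3
    decision = ("auto-approve" if score == 0
                else "peer-review" if score <= 2
                else "cab-review")
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "change": change["id"],
        "score": score,
        "decision": decision,
    })
    return decision

# A tested, reversible change flows straight through...
print(assess_change({"id": "CHG-1", "tests_passed": True,
                     "has_rollback_plan": True}))   # auto-approve
# ...while an untested one is escalated to humans.
print(assess_change({"id": "CHG-2", "tests_passed": False,
                     "has_rollback_plan": True}))   # cab-review
```

The point is not the specific rules but the shape: the desired outcomes survive, while the bi-weekly queue disappears for the bulk of low-risk changes.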

In the past, security breaches were viewed as a single event occurring at a certain point in time. However, this is no longer the case. Security threats now rarely occur as singular events, and a new kind of attack is on the rise: Advanced Persistent Threats (APTs). An APT is a network attack in which an unauthorized person or device gains access to a network and, instead of immediately stealing data or damaging infrastructure, stays there for a long period of time, remaining undetected. It could even occur from a device or person with proper security clearance, thus appearing as normal activity. It is much harder to detect these attacks as they are typically small in scope and focus on very specific targets (usually in nontechnical departments where security threats are less likely to be noticed or reported), and occur over a period of weeks or even months.
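
As a purely illustrative sketch (the traffic figures and thresholds are invented, not drawn from any real incident), the following shows why a per-day check misses this kind of low-and-slow activity: each compromised day stays inside the normal range, and only the cumulative view over the long dwell time reveals the scale of the loss.

```python
# Hypothetical numbers: normal outbound traffic for a host varies
# between 95 and 110 MB/day; an intruder adds a steady 8 MB/day of
# exfiltration on top of a quiet 96 MB/day profile.
NORMAL_RANGE = (95, 110)
STOLEN_PER_DAY = 8
breach_days = [96 + STOLEN_PER_DAY for _ in range(60)]  # 104 MB each day

# Per-day threshold check: every single day looks normal, so no
# alert ever fires during the entire two-month dwell time.
daily_alerts = [d for d in breach_days
                if not (NORMAL_RANGE[0] <= d <= NORMAL_RANGE[1])]

# Long-window view: the cumulative loss is what actually matters.
total_stolen_mb = STOLEN_PER_DAY * len(breach_days)

print(len(daily_alerts))   # 0
print(total_stolen_mb)     # 480
```

This is why APT defenses emphasize long-baseline analytics and dwell-time metrics rather than single-event thresholds.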

In 2014, RSA, a cybersecurity company, was called into the U.S. government’s Office of Personnel Management to fix a low-level problem. Upon arrival, RSA discovered that there were intruders in the agency’s network, and they had been there for over six months, routinely stealing data in an organized yet inconspicuous manner. If not for the coincidental security check by RSA, the organization would never have noticed the breach. Ironically, the door into the system was unwittingly opened by an employee who accidentally downloaded malware from a spear-phishing attack, much like the Google Docs cyber attack that took place in May. The employee was quickly informed and asked to change his password; he and his agency thought the breach ended there, but it continued for months undetected.



Severe weather across the United States in May resulted in combined public and private insured losses of at least $3 billion.

Aon Benfield’s latest Global Catastrophe Recap report reveals that central and eastern parts of the U.S. saw extensive damage from large hail, straight-line winds, tornadoes and isolated flash flooding during last month’s storms.

The most prolific event? A May 8 major storm in the greater Denver, Colorado metro region, where damage from softball-sized hail resulted in an insured loss of more than $1.4 billion in the state alone.



Automation gets a bad rep these days, what with public fear that robots will take over jobs (an invalid assumption – we will be working side by side with them).

However, if you asked the most diehard Luddites if they were ready and willing to give up the following:

  • Depositing a check using a mobile app
  • Ordering products on Amazon to receive the next day
  • Accepting a jury duty request online

...they would probably hesitate.



The Business Continuity Institute

New and evolving threats combined with persistent resource challenges limit organizations’ abilities to defend against cyber intrusions, and 80% of security leaders now believe it is likely their enterprise will experience a cyber attack this year. Despite this, many organizations are struggling to keep pace with the threat environment.

ISACA's State of Cyber Security Study found that more than half (53%) of survey respondents reported a year-on-year increase in cyber attacks for 2016, representing a combination of changing threat entry points and types of threats. IoT overtook mobile as the primary focus for cyber defenses, as 97% of organizations saw a rise in its usage. As IoT becomes more prevalent in organizations, cyber security professionals need to ensure protocols are in place to safeguard new threat entry points.

62% reported experiencing ransomware in 2016, but only 53% have a formal process in place to address it - a concerning number given the significant international impact of the recent WannaCry ransomware attack. Malicious attacks that can impair an organization’s operations or user data remain high in general (78% of organizations reporting attacks).

Additionally, fewer than a third of organizations (31%) say they routinely test their security controls, and 13% never test them. 16% do not have an incident response plan.

“There is a significant and concerning gap between the threats an organization faces and its readiness to address those threats in a timely or effective manner,” said Christos Dimitriadis, board chair and group head of information security at INTRALOT. “Cyber security professionals face huge demands to secure organizational infrastructure, and teams need to be properly trained, resourced and prepared.”

The Business Continuity Institute

Two-thirds of UK businesses believe their organization to be highly protected from attempts by outsiders to gain access to their systems and data, and a similar proportion maintain they have the right processes in place to adequately react to privacy and security threats.

The Willis Towers Watson Cyber Pulse Survey also found that the disparity between corporate feelings of preparedness and the increasing number of cyber security incidents could be a result of lack of responsibility or accountability among employees, the human element of the cyber equation. UK employees ranked ‘insufficient understanding’ (61%) as the biggest barrier to their organization effectively managing its cyber risk. Nearly half (46%) spent 30 minutes or less on cyber security training in 2016, and over a quarter (27%) received none at all.

More concerning for employers is the discovery that, of the employees who did complete cyber training, nearly two-thirds (62%) admitted they “only completed the training because it was required”, and nearly half (44%) believe that opening any email on their work computer is safe. This suggests that the employees may not be engaged or feel the personal accountability necessary to drive long-term, sustainable behaviours.

Anthony Dagostino, Head of Global Cyber Risk, Willis Towers Watson, said: “As the world has seen with the proliferation of phishing scams, most recently highlighted by the global WannaCry ransomware attack, the opening of just one suspicious email containing a harmful link or attachment can lead to a company-wide event. However there appears to be a disconnect between executive priorities around data protection and the need to invest in a cyber-savvy workforce through training, incentives and talent management strategies.”

The survey also detailed additional barriers that companies feel impact their cyber preparedness and the degree to which corporations are providing cyber training to their employees. Nearly a third (30%) of employees surveyed have logged into their work-designated computer or mobile device over an unsecured public network (such as public Wi-Fi). Only 40% of the employers surveyed felt that they had made progress addressing cyber security factors tied to human error and behaviours in the last three years.

Issues such as these were raised in the Business Continuity Institute's cyber security report, published during Business Continuity Awareness Week, which highlighted several areas in which users can leave their organizations vulnerable to a cyber attack.

“Hackers are exploiting the fact that while corporations are building walls of technology around their organizations and their networks, by far the biggest threat to corporate digital security and privacy continues to come from the employees within, often completely by accident,” said Dagostino. “A truly holistic cyber risk management strategy requires at its core a cyber-savvy workforce, however organizations first have to know where the vulnerabilities are in order to plug the gaps. Many organizations are facing talent deficiencies and skills shortages in their IT departments, which in turn are creating significant loopholes in their overall security measures.”

Three items – two surveys and a government study – that were released in recent weeks show just how serious the Internet of Things (IoT) security situation is.

Altman Vilandrie & Company found that 48 percent of responding organizations’ IoT networks have been breached, some more than once.

The survey, which included results from almost 400 firms, also touched on budgetary issues. IoT security breaches can cost the equivalent of 13.4 percent of annual total revenue for firms that take in less than $5 million. Almost half of larger companies, with annual revenues of more than $2 billion, estimated that just one breach could cost more than $20 million, according to the press release.



Thursday, 08 June 2017 14:20

IoT Security: Even Worse Than You Think

(TNS) - Disaster can strike at any time, which means the Georgia Emergency Management Agency, Homeland Security, is open every minute of every hour of every day.

In a bunker in the GEMA headquarters’ basement on the east side of Atlanta, people are always ready to answer the call when calamity strikes.

From a terror attack, to hurricanes, wildfires or avian flu outbreaks, GEMA has a hierarchy of emergency specialists ready to mobilize.

Wednesday morning, Macon-Bibb County’s emergency planners visited the State Operations Center for their regular Emergency Support Functions meeting.



The cloud is quickly expanding from a general-purpose data services platform to a series of highly targeted industry vertical solutions, further decreasing the need for businesses of all types to maintain infrastructure to support their operational models.

This is proving to be a crucial niche for smaller cloud players, and even software developers incorporating cloud services into their platforms, as they attempt to carve market share from hyperscale providers like Google and Amazon.

Earlier this year, ZDNet’s Manek Dubash highlighted the steady shift toward vertical clouds, noting that while initial cloud deployments were earmarked for bulk storage and generic processing, organizations are now looking for the same customized environments in the cloud that they have built up in the local data center. Initially, of course, vertical clouds gravitated toward leading industries like health care and finance, but this is changing as digitalization puts pressure on all sectors of the economy to streamline infrastructure and augment their product lines with digital services.



SEATTLE – A year following one of the nation’s largest domestic drills, lessons learned continue to guide strategies that improve the Pacific Northwest’s ability to survive and recover from a catastrophic Cascadia Subduction Zone (CSZ) earthquake and tsunami.

On June 7, 2016, more than 20,000 emergency managers in Idaho, Oregon and Washington kicked off Cascadia Rising 2016, a four-day, large scale exercise to test response and recovery capabilities in the wake of a 9.0 magnitude CSZ earthquake and tsunami. The exercise involved local, state, tribal and federal partners, along with military commands, private sector and non-governmental organizations.

Lessons learned from Cascadia Rising 2016

"I'm pleased the momentum from Cascadia Rising continues to gain speed," said Maj. Gen. Bret Daugherty, director of the Washington Military Department and commander of the Washington National Guard. "As a result of the exercise, our governor directed the formation of a Resilient Washington sub-cabinet, a multi-agency workgroup charged with improving our state's resiliency. Cascadia Rising also guided our decision to change our recommendation on preparedness, so we're now telling people to have enough emergency supplies to stay on their own for up to two weeks."

“Cascadia Rising was the largest exercise the State of Oregon has ever conducted. The complexity of the four-day exercise provided an unprecedented opportunity to examine and assess response and emergency management practices, and identify areas where we excel and where we can improve,” said Oregon Office of Emergency Management Director Andrew Phelps. “The collaboration among all levels of government, and with our private sector partners leading up to and during the exercise, was outstanding. I believe these relationships were strengthened through this experience and will continue to grow as we work toward enhancing our preparedness posture.”

“In addition, Cascadia Rising served as a reminder to all Oregonians that individual and family emergency preparedness is key to an effective response to an earthquake or any disaster and to beginning the recovery process,” said Phelps. “As we constantly improve our capabilities, we ask all to be prepared for at least two weeks.”

Idaho’s participation helped raise awareness that the residual effects of an earthquake and tsunami along the coast would be felt in Idaho. That includes the possible need to accommodate tens of thousands of evacuees and displaced persons who would be directly impacted.

“The countless strong partnerships we cultivated in the years leading up to the exercise proved invaluable to the success of Cascadia Rising in Idaho,” said Gen. Brad Richy, of the Idaho Office of Emergency Management. “The collaboration with FEMA Region 10, and our Idaho counties, is proving indispensable as Idaho currently manages one of the most challenging flood seasons on record. Thirty-one of Idaho’s 44 counties have disaster declarations in place right now. When people ask about the importance of exercises, I like to point out that lessons learned during Cascadia Rising 2016 have improved our swift and effective response to the 2017 flooding disasters.”

“The Cascadia Rising 2016 exercise highlighted a number of critical areas that we, the emergency management community, should improve before this fault ruptures, which will impact large portions of our residents and infrastructure. It is exercises like this that foster coordination and help build relationships before a real-world event occurs,” said Sharon Loper, Acting FEMA Region 10 Administrator. “The exercise highlighted a number of infrastructure interdependencies our residents have come to rely on, such as electricity, communications, fuel, water and our roads. Most of these sectors would be heavily disrupted after a CSZ event, and plans are being developed and exercised that focus on the efficient recovery of these essential services. In this past year, FEMA Region 10 has made improvements in coordinating disaster logistics, family reunification strategies and mass power outage scenarios with our partners.”

“Every exercise teaches us something and improves our response,” said Loper. “I’m pleased so many partners and community members collaborate on these important issues. We should continue to work together so that we are all better prepared to protect lives and property.” 


Lying mostly offshore, the Cascadia Subduction Zone is a giant fault approximately 700 miles long, where the tectonic plates to the west slide (subduct) beneath the North American plate. Friction keeps the two plates locked in place, and stress builds continuously along the boundary until the fault suddenly breaks, resulting in a potentially devastating rupture along its full 700-mile length and an ensuing tsunami along the California, Oregon and Washington coastlines. Last year’s Cascadia Rising 2016 exercise tested plans and procedures against a 9.0M earthquake and follow-on tsunami, with the goal of improving catastrophic disaster operational readiness across the whole community.

The Cascadia Subduction Zone off the coast of North America spans from northern California to southern British Columbia. This subduction zone can produce earthquakes as large as magnitude 9 and corresponding tsunamis.

Cascadia Rising 2016 was a four-day exercise focused on interagency and multi-state coordination following a 9.0M Cascadia Subduction Zone earthquake and follow-on tsunami. Emergency management centers at the local, state, tribal and federal levels, in coordination with military commands, the private sector and non-governmental organizations in Washington, Oregon and Idaho, activated to coordinate simulated field response operations.

Related Links
The Business Continuity Institute

When we're developing our business continuity programmes, do we consider political rows a threat to our organization? Do we consider whether a dispute between countries could filter down and affect us? Political tensions certainly exist worldwide, you only have to look at the relationship between the US and Mexico to understand this, or the tensions that are growing between the UK and the EU as Brexit looms closer.

Qatar has now found itself at the centre of such an issue as many of its neighbouring Gulf States are cutting diplomatic ties and closing borders. Saudi Arabia, Bahrain, Egypt, the UAE and Yemen have all turned their backs on Qatar, leaving it in isolation, and Qatari citizens in those countries have been given two weeks to leave.

Qatar is the world's largest supplier of liquefied natural gas, although exports don't seem to be affected so far. The problem is imports, which Qatar relies on for 80% of its food. With its only land border closed at Saudi Arabia's insistence, a backlog of lorries has now been held up and is waiting to be re-routed.

The problem is further exacerbated as many containers destined for Qatar arrive via Dubai, where they are transferred to smaller vessels to complete the journey. With a ban on all vessels travelling to, or arriving from, Qatar, this is no longer an option.

Added to this, many infrastructure and construction projects in Qatar use consultants from elsewhere in the Middle East, and those consultants are now unable to travel to Qatar to support these projects, meaning delays are inevitable.

Clearly the cause of the diplomatic tension is important, but organizations must think beyond this and consider how it will have an impact on them and their supply chains. While it is unknown how long this situation will last, a similar incident occurred in 2014 and went on for nine months, so it may not end any time soon.

Most people think of Mail-Gard in terms of disaster recovery support. It’s what we’re known for, and intuitively, people understand that during a flood, tornado, or major power outage, a backup partner is necessary to ensure important documents are still delivered to customers to keep your business running without interruption.

But bad weather and Acts of God aren’t always to blame for the times when a company’s in-house print and mail operations may be swamped. Seasonal volume swings or equipment upgrades can leave in-house operations overwhelmed, and they simply can’t keep up. That’s when Mail-Gard’s print outsourcing comes in, and it’s another area where we can be there for you in a time of need.



If you’ve read through our recent post on ISO Business Continuity Standard 22301, you know the components involved in building a high-performing program. Still, it can be a daunting task to meet this complex standard; how can you be sure you have all the angles covered? Where should you even start?

To ensure consistency and completeness as you develop your program, we’ve designed an ISO 22301 checklist. If you can verify that your program has each of the following elements associated with Sections 5-10 of the standard, your company does indeed have the organized and thorough continuity program outlined in ISO 22301. You can also use it as an ISO 22301 audit checklist if your company is preparing to undergo an official certification process. *The starred items are where most companies fall short, in our experience, so pay special attention to your efforts in those areas.

A crucial part of meeting business continuity standards like ISO 22301 is a well-written business recovery plan. Find out the components of a successful plan and get sample checklists in this free guide.



The Business Continuity Institute

It is election day in the UK tomorrow, and a chance to vote for who we want to represent us in Parliament, and ultimately who we want to lead our country into the future. A month ago we could probably have said with some degree of certainty who the winner would be and by how much, but now that certainty is gone. The gap has narrowed and we’re not entirely sure what direction the country will take from Friday morning onwards.

There could still be a majority Government for the Conservative Party, or they could lose that majority and seek to form a Coalition Government, or even try to make it work on their own. It is not completely out of the question that the Labour Party could win.

For many countries over the last few decades, the election of a new government has arguably resulted in very little noticeable change. The policies of our leading parties may vary slightly, but it hasn’t usually made a substantial difference to us or our organizations.

Politics is changing though, and the leading parties all over the world are moving further apart on the political spectrum. You only have to look to the US and French Presidential Elections to observe the deep divides that are appearing between political parties and across the population.

The UK is no different. Given the current split between the Conservative and Labour Parties, the outcome could determine whether we will have more years of austerity and the privatisation of public services, or whether we will have increased public spending and nationalisation. It could determine whether we have a soft Brexit, a hard Brexit, or perhaps even no Brexit at all. The impact will not just be felt in the UK, but all across the European Union and perhaps even further afield.

It is this uncertainty that puts business continuity professionals in their element – being able to analyse what the possible outcomes could be, what impact they could have on the organization and what mechanisms could be put in place to prevent them from becoming an issue.

Whenever an election does occur, whether it is in your organization’s home country, or one it does business with, business continuity professionals should be studying the manifestos of the major parties to consider how much of an impact the different policies could have on their organization.

Will there be more or less regulation? Will there be more or less public spending? Will there be more or less interference from Central Government? Whatever the answer to these questions, our organizations will have to consider the appropriate responses. These considerations should also go beyond the direct impact of the policies to the unintended consequences: for example, will certain policies result in increased protester activity that could lead to disruption?

We cannot predict the future, and in countries where there are free and democratic elections, there is no way of knowing for certain what the outcome of those elections will be, but we can prepare for it. We can ensure that our organizations are more resilient to the changes that may come about as a result.

If our organizations are to achieve continued growth, then they must be adaptable to change, wherever that change may come from. They must be able to overcome uncertainty, wherever that uncertainty lies. And they must be prepared for the future, whatever it holds.

David Thorp
Executive Director of the Business Continuity Institute

One of the biggest challenges with shifting applications from an on-premises environment into a public cloud is the sheer volume of data that often needs to be moved. The amount of time and effort involved in a cloud migration for many IT organizations has been nothing less than daunting.

Veritas Technologies today announced it has significantly simplified those data migration issues with the launch of Veritas CloudMobility, which allows IT organizations to apply software that Veritas originally developed for application backup to the task of cloud migration. Alex Sakaguchi, director of global solutions marketing for Veritas, says the difference now is that data migration into the cloud can be executed via a single mouse click.

“It’s based on the same technology we use for disaster recovery,” says Sakaguchi.

Veritas today also announced Veritas CloudPoint, which enables IT organizations to much more aggressively schedule the capturing of snapshots of data residing in multiple public clouds as part of an effort to improve recovery time and recovery point objectives.



Ransomware defense is often an uncomfortable subject where enterprises must face some hard truths and new responsibilities. Nevertheless, it’s becoming increasingly necessary. 

According to the FBI, there were an average of 4,000 ransomware attacks per day in 2016. This represents a 300% increase from 2015. Unfortunately, when we consider data breaches, we are usually talking about how organizations are prepared and will act during a breach, not whether a breach will occur. We see and hear about ransomware at an increasing rate, most recently with the WannaCry attack. WannaCry infected hundreds of thousands of computers in over 150 countries, from individuals to large organizations.



This is part 6 of a multi-part series on the Analytics Operating Model.

In our recent blog, Data Oriented Architecture: Laying the Right Foundation, we examined the challenges of selecting an enterprise platform for big data and approaches for overcoming those challenges. As we continue this series, we dive deeper to explore each element of a sound analytics capability in an era where data is plentiful and computing is powerful, but results have not yet lived up to the hype.

We live and work in an age where every device and sensor can generate and transmit data. Companies must develop capabilities that allow them to identify and manage the right data resources. A key driver of success is an organization’s ability to manage data in a way that maximizes value through analytics data management practices. High quality, accurate data is crucial to a successful algorithm and ensures that data-driven decisions are based on facts. Further, companies must also consider how this data fits into the broader organizational value chain in which they are operating to fully extract and understand the value stored in their data.

The ability to deliver analytics solutions successfully depends on an organization’s ability to understand how different data sets fit into the broader organization, which is achieved by managing data as an asset and establishing an enterprise data model.



Before my last deployment (quite a while ago, thankfully) my unit was training on a variety of tactics to make us all more effective in an operational setting. That’s the long way of saying we were all getting PT'd repeatedly and learning how terrible we were at stopping the bad guys; luckily, we all got better as time went on. Anyway...

One of the most valuable lessons we learned from working with the guys in some of the more “special” operational roles was that things shouldn’t be fair. 

In other words, the bad guys didn’t play fair…Why should we?



Wednesday, 07 June 2017 15:05

For More Cyber Operations Wins, Cheat…

Hyperconverged infrastructure (HCI) is typically thought of as a web-scale data center solution. But it turns out that the technology’s space and management advantages are making it equally compelling for small and medium-sized businesses (SMBs).

Everyone has a vested interest in streamlining their data infrastructure, but for SMBs, the need is even greater because they are facing the same Big Data explosion, relative to their size, as larger organizations but lack the finances and manpower to build a traditional enterprise environment. There is always the cloud, of course, but costs tend to scale with both data loads and the level of service required, and performance tends to be lacking compared to state-of-the-art, on-premises resources.

Mike Grisamore, vice president of small business sales at CDW, argues in Biz Tech Magazine that with entry-level HCI systems now starting at $25,000, small organizations have an opportunity to kick their IT infrastructure into the future without blowing their typically tight budgets. The two key use cases for HCI are small organizations that are due for a hardware refresh and those that are launching specialized projects. And the best part is that with a modular infrastructure, organizations can start small and easily add modules as requirements scale, usually with limited in-house technical staff, or none at all.



Wednesday, 07 June 2017 14:59

SMBs Warming Up to Hyperconvergence

The Business Continuity Institute

The Conservative, Labour and Liberal Democrats’ manifestos all ignore the true potential impact of Brexit, a new report by academics from The UK in a Changing Europe shows. Brexit may impose significant economic costs, at least in the short-term, yet all the parties make pledges as if the post-Brexit world will be a case of business as usual.

The report, Red, Yellow and Blue Brexit: the manifestos uncovered, highlights the challenge Brexit represents for the British state. The civil service will need, among other things, to coordinate the negotiations, draft the Great Repeal Bill and prepare primary legislation, while the necessary administrative and regulatory structures will need to be put in place before the UK leaves the EU. Yet the manifestos, with their ambitious policy pledges, fail to take account of the constraints this process will place on administrative resources.

Professor Anand Menon, The UK in a Changing Europe director, said: “The majority of the next parliament will be a post-Brexit one. It will have to deal with the implications of one of the most important and difficult decisions that Britain has ever taken. What a shame the parties did not factor this into their plans.”

The Conservative policy to reduce net immigration to the tens of thousands is also likely to have severe economic consequences. The party does not quantify the consequences of its immigration policies. However, the Office for Budget Responsibility has estimated the fiscal impact of a reduction of net migration from 265,000 to 185,000 at about £6bn a year by 2021.

Labour want to maintain membership of the single market but will end freedom of movement when the UK leaves the EU. Their position is thus fatally flawed as the EU will demand acceptance of all four freedoms in return for membership of the single market.

Labour states it would immediately guarantee the rights of EU nationals in the UK and ‘secure reciprocal rights’ for UK nationals in the EU. However, the EU has made clear there will be no final agreement on any one area until there is agreement in all areas. Also, there is no detail on how the huge administrative challenges will be met.

In foreign and security policy, the report notes that there is strikingly little of substance in any of the manifestos as to how Brexit might impact on Britain’s international role. Nowhere are strategic priorities laid out.

None of the manifestos comprehensively addresses what will happen in relation to EU public health law and policy. Nor are the profound challenges Brexit poses to the devolution settlement grappled with.

There is no mention of the jurisdiction of the European Court of Justice in the Conservative manifesto – previously a clear ‘red line’. Yet there is absolutely nothing in the Tory manifesto to reassure key sectors like pharmaceuticals, financial services, and the automotive industry, whose regulatory position, access to markets, or supply chains are threatened by Brexit.

Labour provides no detail as to why ‘no deal’ is the worst possible option for Britain, rejecting it as a viable alternative but failing to make clear how the EU27 could be made to agree to this.

A Liberal Democrat government would be caught between negotiating a very close relationship with the EU and arguing such a relationship would not be preferable to remaining.

Professor Menon said: “What is striking is that while all three parties view Brexit as a major event, the manifestos treat it largely in isolation from other aspects of policy, rather than the defining issue of the next parliament.”

Wednesday, 07 June 2017 14:57

BCI: Manifestos Hide Truth About Brexit

The Business Continuity Institute

Four in 10 organizations believe that C-level executives, including the CEO, are most at risk of being hacked when working outside of the office, according to a new study by iPass. Cafes and coffee shops were ranked the number one high-risk venue by 42% of respondents, from a list also including airports (30%), hotels (16%), exhibition centres (7%) and airplanes (4%).

Compiled from the responses of 500 organizations from the US, UK, Germany and France, the annual iPass Mobile Security Report provides an overview of how companies are dealing with the trade-off between security and the need to enable a mobile workforce. Indeed, the vast majority (93%) of respondents said they were concerned about the security challenges posed by a growing mobile workforce. Almost half (47%) said they were ‘very’ concerned, up from 36% in 2016. Furthermore, more than two thirds of organizations (68%) have chosen to ban employee use of free public Wi-Fi hotspots to some degree (compared to 62% in 2016), while 33% of organizations ban employee use at all times, up from 22% in 2016.

“The grim reality is that C-level executives are by far at the greatest risk of being hacked outside of the office. They are not your typical 9-5 office worker. They often work long hours, are rarely confined to the office, and have unrestricted access to the most sensitive company data imaginable. They represent a dangerous combination of being both highly valuable and highly available, therefore a prime target for any hacker,” said Raghu Konka, vice president of engineering at iPass. “Cafes and coffee shops are everywhere and offer both convenience and comfort for mobile workers, who flock to these venues for the free high speed internet as much as for the coffee. However, cafes invariably have lax security standards, meaning that anyone using these networks will be potentially vulnerable.”

Man-in-the-middle attacks, whereby an attacker can secretly relay and even alter communications without the mobile user knowing, were identified by 69% of organizations as being of concern when their employees use public Wi-Fi. However, more than half of respondents also chose a lack of encryption (63%), unpatched operating systems (55%), and hotspot spoofing (58%) as chief concerns.
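One practical countermeasure against the man-in-the-middle, missing-encryption and hotspot-spoofing concerns above is to refuse any connection whose certificate cannot be validated. A minimal sketch in Python's standard `ssl` module (the settings shown are illustrative hardening choices, not iPass recommendations):

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Build a client TLS context that always validates the certificate
    chain and hostname, so a spoofed hotspot presenting a forged
    certificate fails the handshake instead of silently connecting."""
    context = ssl.create_default_context()
    context.check_hostname = True              # the default, stated explicitly
    context.verify_mode = ssl.CERT_REQUIRED    # the default, stated explicitly
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return context

ctx = strict_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

An application would pass this context to its socket or HTTP layer; the point is that certificate and hostname checks are never disabled, even on untrusted networks.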

The dangers of using public Wi-Fi were an issue raised in the Business Continuity Institute's cyber security report, published during Business Continuity Awareness Week, which also highlighted several other areas in which users can leave their organizations vulnerable to a cyber attack.

Some of the other findings of the iPass report and regional trends include:

  • The US (98%) is most concerned by the increasing number of mobile security challenges – compared to France (88%), Germany (89%) and the UK (92%)
  • Nearly one in ten UK organizations (8%) said that they have no security concerns when employees use public Wi-Fi hotspots. In contrast, this figure is 1% in the US and Germany, and 2% in France
  • Similarly, UK organizations are the least likely to ban the use of public Wi-Fi. 44% said that they have no plans to do so, as opposed to 8% in Germany, 10% in the U.S. and 15% in France
  • Worldwide, 75% of enterprises still allow or encourage the use of MiFi devices. In France, however, 29% of businesses have banned them due to security concerns

“Organizations are more aware of the mobile security threat than ever, but they still struggle to find the balance between security and productivity,” continued Konka. “While businesses understand that free public Wi-Fi hotspots can empower employees to do their job and be more productive, they are also fearful of the potential security threat. Man-in-the-middle attacks were identified as the primary threat, but the entire mobile attack surface is getting larger. Organizations must recognize this fact and do their best to ensure that their mobile workers are securely connected.”

“Sadly, in response to this growing threat, the majority of organizations are choosing to ban first and think later. They ignore the fact that, in an increasingly mobile world, there are actually far more opportunities than threats. Rather than give in to security threats and enforce bans that can be detrimental or even unenforceable, businesses must instead ensure that their mobile workers have the tools to get online and work securely at all times.”

The cliché of “change is the only constant” is true for most enterprises. Customers, business analysts, and employees all expect some sort of evolution, even if it is with varying degrees of enthusiasm.

Even the minority whose positioning is deliberately one of no change (providers of traditional goods and services) are affected by changes in the way governments tax and regulate them, or how suppliers supply to them.

When it comes to business continuity, plans and management must keep pace with business changes too. But is an annual, a quarterly, or even a monthly BCP review the right way to stay synchronized?

There is a dilemma with business continuity planning and reviews. Making them too infrequent means greater risk of being out of alignment and less able to react effectively to threats of business interruption.



The data center is on a clear trajectory toward greater abstraction, greater resource distribution, and greater diversity in both the workloads it supports and the technologies it brings to bear.

All of this leads to an increasingly complex management challenge that pits the need for greater autonomy among users and applications against the needs of the enterprise to maintain data availability and security while keeping budgets under control.

According to Shay Demmons, executive VP of BaseLayer’s RunSmart software division, this challenge is compounded by the fact that most organizations are branching into new IoT and service-level data architectures that must reach back to legacy infrastructure for crucial data support. This calls for a “looking forward, looking backward” management approach that, in fact, utilizes many of the same technologies that are driving the transition to digital services – things like sensor-driven data systems, advanced visibility and intelligent automation that propel workflow management and resource allocation to the speed of modern business.



With so many files in existence and so many more being created every moment, it’s no wonder so many breaches and data loss incidents occur. We asked the experts for some of the top tips on keeping storage data protected.

1. Limit and Monitor Access

Many of the big data breaches we read about in the news trace their origins back to one of these two issues, and most likely both: too much access and little or no monitoring of that access. These are some of the biggest problems in data security, according to Rob Sobers, Director at Varonis.

The 2017 Varonis Data Risk Report found that 20 percent of folders are open to every employee. Forty-seven percent of organizations in the report had 1,000 or more sensitive files containing personal data, health records, financial information or intellectual property open to every single user. Not only are sensitive files open to more people than necessary, but access abuse is not monitored and flagged. This is why 63 percent of data breaches take months or years to detect, according to the report.
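Auditing over-broad access can start very simply. As a minimal sketch (not a Varonis tool, and POSIX-only by assumption), this Python function flags files whose permission bits grant read access to every user on the system:

```python
import stat
from pathlib import Path

def world_readable(root: str) -> list[str]:
    """Return files under `root` whose permission bits grant read
    access to 'other', i.e. to every user on a POSIX system."""
    flagged = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mode & stat.S_IROTH:
            flagged.append(str(path))
    return sorted(flagged)
```

A real deployment would feed results like these into continuous monitoring and alerting, which is the second half of the advice above: restrict access first, then watch how it is used.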



Tuesday, 06 June 2017 15:14

8 Vital Data Protection Tips

(TNS) - With an above average hurricane season predicted, the lack of leadership at two agencies responsible for protecting the United States' coastlines should be a sobering thought, said a widely admired general who led the military’s response to Hurricane Katrina.

The National Oceanic and Atmospheric Administration, which runs the National Hurricane Center, and the Federal Emergency Management Agency are both without leaders. Those positions must be appointed by President Donald Trump and confirmed by the U.S. Senate, CNN reported.

“That should scare the hell out of everybody,” retired Lt. Gen. Russel Honoré told CNN. “These positions help save lives.”

Honoré, who served as commander of Joint Task Force Katrina and coordinated military relief efforts, told CNN that the disaster proved “how important leadership was.”



Many critical industries such as nuclear energy, commercial and military airlines—even drivers’ education—invest significant time and resources in developing processes. The data center industry … not so much.

That can be problematic, considering that two-thirds of data center outages are related to processes, not infrastructure systems, says David Boston, director of facility operations solutions for TiePoint-bkm Engineering.

“Most are quite aware that processes cause most of the downtime, but few have taken the initiative to comprehensively address them. This is somewhat unique to our industry.”



Nearly 6.9 million homes along the Gulf and Atlantic coasts have the potential for storm surge damage with a total estimated reconstruction cost value (RCV) of more than $1.5 trillion.

But it’s the location of future storms that will be integral to understanding the potential for catastrophic damage, according to CoreLogic’s 2017 storm surge analysis.

That’s because some 67.3 percent of the 6.9 million at-risk homes and 68.6 percent of the more than $1.5 trillion total RCV is located within 15 major metropolitan areas.



The Business Continuity Institute

Recently, the National Crime Agency (NCA) and National Cyber Security Centre (NCSC) launched their first joint report into ‘The cyber threat to UK businesses’. Outlined are the key trends that are expected to be seen across the cyber security industry over the coming months. Ransomware, which has experienced rapid growth over the last year and presents a hugely lucrative industry for cyber criminals, was recognised as an escalating threat to UK businesses.

Creating and deploying ransomware has never been easier. Malicious code needed to create the ransomware can now be readily outsourced, with 'Ransomware as a Service' models already available on the dark web, where wannabe attackers can purchase ready-made malware packages. This ease of procurement, teamed with the financial opportunity associated with targeted attacks, means ransomware will continue to be a huge threat in 2017.

The targets?

This increased accessibility has significantly broadened the variety of potential attackers in recent years, and as such it’s hard to generalise about the motivations of individuals. Whether it's lone actors operating from a bedroom, a politically-motivated hacktivist, or an international criminal organisation with salaried employees, everyone is a target to someone.


Individual consumers and smaller organisations represent low value targets. At this end of the spectrum, ransomware is a numbers game, and attackers tend to follow the path of least resistance. In practice, that means working through organisations that meet certain basic criteria (e.g. charities in London, with <£5m turnover), or individuals that represent demographics with little to no education in cyber security.


Larger organisations with valuable datasets and a public reputation to protect obviously represent high-value targets, and often attract the most sophisticated attacks as a result. One of the key determinants of severity is the level of access privileges held by the infected user. This makes power users such as sysadmins and senior executives far more valuable targets than ordinary users. Attackers can spend weeks or even months probing attack vectors in order to locate senior individuals susceptible to compromise.

The recipe for a successful attack

Whoever the target is, the rise of cryptocurrencies has increased the degree of anonymity afforded to criminals taking ransom payments. Cyber criminals balance risk and reward. Taking payments as cryptocurrency means the reward has stayed constant, whilst the risk of being caught has dropped significantly.

Although the government’s report advised UK organisations to combat cyber attacks by reporting attacks, promoting awareness and adopting cyber security programmes, it failed to acknowledge the more immediately actionable role that good business continuity practices can play in surviving and recovering from cyber attacks. Whilst outright prevention of a ransomware attack may be impossible, good continuity practices, such as a carefully tailored backup solution, can effectively negate the consequences.

What continuity practices can organisations implement to ensure they recover as quickly as possible?

Planning your defence

Something that was omitted from the government’s advice report is the importance of having an effective incident response plan in place. We typically advise that companies should plan for impacts and test for scenarios. Impact-based planning works on the basis that while there are an infinite number of possible disasters, the number of potential consequences at the operational level is much smaller. Scenario-based planning asks users to anticipate the consequences of a disastrous event and to create solutions ahead of time.

However, certain threats do warrant specific response plans, and this is certainly the case for ransomware. Ransomware can lie dormant on servers for a period of time to deliberately outlast a backup strategy. As a result, it needs a different approach and plan to recover effectively.

Testing, testing, testing

Once this plan has been established, it is vital to then test that plan and make sure it works. Where this isn’t possible, organisations should run exercises such as a tabletop test as a minimum. This involves organisations responding to a simulated disruption by walking through their recovery plans and outlining their responses and actions.

Plans should be regularly reviewed, updated and tested. This ensures that in the event of an incident, plans can be executed as effectively as possible with minimum impact to everyone concerned. It would be advisable for UK organisations to make a ransomware attack the next focus of any future continuity planning if they have not done so already.

Road to recovery

In a ransomware attack, a business has two choices: recover the information from a previous backup or pay the ransom. In many cases, even when a ransom has been paid, the data has not been released, so paying does not guarantee you will get your data back.

There are two main objectives when recovering from ransomware: to minimise the amount of data loss and to limit the amount of IT downtime for the business. The fastest way to recover from most incidents is to fail over to replica systems hosted elsewhere, but these traditional disaster recovery services are not optimised for cyber threats. Replication software will immediately copy the ransomware from production IT systems to the offsite replica, and it often retains only a limited number of historic versions to recover from, so by the time an infection has been identified, the window for recovery has gone. This means that ransomware recovery can be incredibly time consuming, requiring reverting to backups and often trawling through historic versions to locate the clean data.
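The "trawling through historic versions" step amounts to a newest-to-oldest scan of the backup catalogue, looking for the most recent version that predates the infection. A minimal sketch of that search, assuming a hypothetical catalogue in which each version already carries an infection flag from some prior integrity scan (the data structure and flag are illustrative assumptions, not any vendor's format):

```python
from datetime import datetime

# Hypothetical backup catalogue: each entry records when the backup was
# taken and whether an integrity scan flagged signs of encryption
# (high-entropy files, known ransomware file extensions, etc.).
backups = [
    {"taken": datetime(2017, 5, 11), "infected": False},
    {"taken": datetime(2017, 5, 18), "infected": False},
    {"taken": datetime(2017, 5, 25), "infected": True},
    {"taken": datetime(2017, 6, 1), "infected": True},
]

def last_clean_backup(catalogue):
    """Walk the versions newest-to-oldest and return the most recent
    backup with no signs of infection, or None if the malware has
    outlasted the entire retention window."""
    for entry in sorted(catalogue, key=lambda e: e["taken"], reverse=True):
        if not entry["infected"]:
            return entry
    return None

restore_point = last_clean_backup(backups)
```

The sketch also makes the retention-window risk concrete: if every version in the catalogue is flagged, the function returns nothing at all, which is exactly the scenario dormant ransomware is designed to create.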

Ransomware will only continue to rise, so organisations must regard infection as a matter of ‘when’ rather than ‘if’ and take the appropriate steps to mitigate the risks. The advice from the government provides a solid foundation, but it is imperative that organisations have an effective response plan and backup strategy to support it.

Peter Groucutt, managing director at Databarracks

Tuesday, 06 June 2017 14:01

BCI: How to Survive a Ransomware Attack

The Business Continuity Institute

84% of UK small business owners and 43% of senior executives of large companies are unaware of the forthcoming General Data Protection Regulation, despite there now being less than a year until the law comes into force. The law is designed to bring greater strength and consistency to the data protection given to individuals within the European Union.

Shred-It's seventh annual Security Tracker survey also found that only 14% of small business owners and 31% of senior executives were able to correctly identify the fine associated with the new regulation – up to €20 million or 4% of global turnover. This is despite a large proportion of senior executives (95%) and small business owners (87%) claiming to have at least some understanding of their industry’s legal requirements.
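The fine cap quoted above is whichever of the two figures is greater: the flat €20 million or 4% of global annual turnover. The arithmetic is a one-liner (the function name is illustrative):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: the greater of EUR 20 million
    or 4 percent of global annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A firm with EUR 1 billion turnover faces a cap of EUR 40 million;
# a small firm's cap is still the flat EUR 20 million.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(5_000_000))      # 20000000.0
```

The `max` is the point respondents miss: for any business with turnover above €500 million, the percentage term, not the flat figure, sets the exposure.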

Businesses which are unaware of the forthcoming legislation and its implications are putting themselves at risk not only of severe financial penalties, but also of the reputational damage caused by adverse publicity associated with falling foul of the law. This can often have a greater impact than the fine itself. Research shows that 64% of executives agree that their organization’s privacy and data protection practices contribute to reputation and brand image.

Data breaches are already the second greatest cause of concern for business continuity professionals, according to the Business Continuity Institute's latest Horizon Scan Report, and once this legislation comes into force, bringing with it higher penalties than already exist, this level of concern is only likely to increase. Organizations need to make sure they are aware of the requirements of the GDPR, and ensure that their data protection processes are robust enough to meet these requirements.

Of those respondents who claim to be aware of the legislation change, only 40% of senior executives have already begun to take action in preparation for the GDPR, in spite of 60% agreeing that the change in legislation would put pressure on their organization to change its policies related to information security.

The survey also highlights that companies feel the UK Government needs to take more action. 41% of small business owners (an 8% increase from 2016) believe that the Government’s commitment to information security needs improvement.

Robert Guice, Senior Vice President Shred-it EMEAA, said: “As we approach May 2018, it’s crucial that organizations of all sizes begin to take a proactive approach in preparing for the incoming GDPR. From implementing stricter internal data protection procedures such as staff training, internal processing audits and reviews of HR policies, to ensuring greater transparency around the use of personal information, businesses must be aware of how the legislation will affect their company to ensure they are fully compliant.”

“Governmental bodies such as the Information Commissioner’s Office (ICO), must take a leading role in supporting businesses to get GDPR ready, by helping them to understand the preparation needed and the urgency in acting now. The closer Government, information security experts and UK businesses work together, the better equipped organizations will find themselves come May 2018.” 

While risk oversight has always been an important part of the board’s agenda, the disruptive financial crisis of 2007-2008 taught everyone a lesson about just how important it is. In the aftermath of the global financial meltdown and credit crunch, risk oversight became an imperative for boards of public companies, particularly in the United States. Boards of listed companies on U.S. stock exchanges across all industries took a hard look at their membership, how they operated and whether their operations and the information to which they have access are conducive to effective risk oversight.

In addition, since the financial crisis, regulators have taken an active interest in board risk oversight. For example, the Securities and Exchange Commission in the United States requires that proxy disclosures shine the spotlight on the board’s role in overseeing the company’s risk management process, directors’ qualifications for understanding the entity’s risks and evaluation of the entity’s various compensation arrangements by the board’s compensation committee to ensure they are not encouraging the undertaking of excessive, unacceptable risks.

As a result, the risk oversight playbook has evolved over recent years, during which time many boards formulated their respective approaches to risk oversight and organized themselves accordingly. To that end, in 2009, the National Association of Corporate Directors (NACD) published its Report of the NACD Blue Ribbon Commission – Risk Governance: Balancing Risk and Reward. This report recommends 10 principles to assist boards in strengthening their oversight of the company’s risk management.



Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of “more” without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.

The Existing Environment

Although cloud computing, virtualization and hosted data centers are popular, most organizations still have at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations.

The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.



Some managed services providers and IT services companies may view the cloud as a threat, hesitant to relinquish control of their customers’ environments to larger players such as Microsoft, Amazon or Google.

But the cloud isn’t going anywhere, and enterprises are already demanding the features and benefits it provides.

Savvy MSPs and IT services firms aren’t resisting incorporating the cloud into their offerings—they’re embracing it!

Why are these companies willing to build solutions on someone else’s cloud infrastructure?



This is part 5 of a multi-part series on the Analytics Operating Model.

As many organizations embark on the transition from “proof-of-concept” to production, selecting the optimal data platform can be a task not for the faint of heart. According to Gartner, through the end of 2017, approximately 60 percent of big data projects will fail to go beyond piloting and will be abandoned. Compounding this, only 15 percent of those projects will make it to production, barely up from 14 percent in 2016. These numbers raise the question: why can’t enterprises make the leap?

The traditional IT concept of maintaining infrastructures and applications for years no longer applies. As the needs of the enterprise mature, so should the ecosystem. Capabilities to handle real-time streaming data, in-flight transformation, and the shift from Extract-Transform-Load (ETL) to Extract-Load-Transform (ELT)/Schema-on-Read are no longer concepts, but realities that require the adoption of newer tools and techniques.



The Business Continuity Institute

Welcome to the age of disruption, uncertainty and opportunity.

The development of technology is transforming every sphere of our lives. Societies, government and organizations are seeing change at a velocity which has not been witnessed in the history of mankind. The rules of every industry are being rewritten while geopolitics, regulations and globalization are making the world an increasingly complex place to survive and succeed. No one is exempt - including our industry.

Poised at the crossroads of change stands an opportunity. It is called India.

A sixth of the world's population is expected to grow at a pace faster than the rest of the world. 1.25 billion Indians have unprecedented energy and hunger to make a difference to this planet. It is also said that the world is coming to India and will continue to do so for some time. As a country, we are excited and humbled by this massive opportunity. We are also looking forward to the benefits coming from this. This massive tide of opportunity has the potential to lift everybody, promising to benefit individuals, organizations and entire industries.

If India continues on its present growth course, it could have a US$5 trillion economy within the next 20 years. But our national ambition is to perhaps go beyond that. This journey will have its own set of challenges and will not guarantee growth by default. Multiple industries and sectors will evolve in parallel and not always in synergy. At the same time India’s ambition will also collide with global realities at play. In this context, resilience will take a whole new meaning for India.

It is now our time to create our unique point of view in our domain and shape the future of our profession. Navigating this means constant exploration of the strands, fragments, dynamics and even contradictions that form a part of this unfolding narrative. We are grateful to the Business Continuity Institute for helping us start the 20/20 journey in India.

India is one of the BCI’s growth markets, and David Thorp, Executive Director at the BCI, recently said: “the 20/20 Groups are an integral part of the BCI’s aim to shape the future of our profession. This is a space to engage in provocative thought, play with ideas and engage with fellow experts. India holds so much potential in leading our profession and shaping future practice, and I’m counting on the India 20/20 Group to bring out those ideas worth spreading to the rest of the world.”

As such, thought leadership has never been more relevant. We are looking for fresh, powerful thought leadership for the India 20/20 Group, and our mission to achieve this is based on three core beliefs.

  • New thinking can create a better future for business continuity
  • Our ideas have the power to influence global paradigms on business continuity
  • Modern day thought leadership requires a broad base, engaging a more global audience

We believe the challenge for us is how to lead our audiences and stakeholders so that they can explore, alter and shift their understanding about India’s business continuity story and also contribute to the global Think Tank of business continuity professionals.

As leading practitioners of business continuity in India, I invite you to be part of this exciting journey. If you are curious and are able to open new paths through your expertise we can enable you to explore, engage and launch your own thought leadership. If you would like to find out more about the BCI 20/20 Think Tank India Group, or if you would like to submit your interest in becoming a part of it, just click here.

Together let us take business continuity forward.

Arunabh Mitra MBCI is the Chief Continuity Officer of HCL Technologies. He is involved in the leadership of the BCI Hyderabad Forum and also leads the BCI 20/20 in India.


Director of Technical Marketing, iland

It only takes one storm to change one’s life and business. Natural disasters strike with little to no warning and can be devastating to an organization’s operational and economic infrastructure. In today’s world of 24/7/365 always-available business, if a business is down even for a few hours, customers will not wait for recovery. They will find someone else and take their business there. The total cost of downtime, including damage to brand and reputation and lost customers, is monumental. The Federal Emergency Management Agency (FEMA) estimates that 40 percent of businesses do not reopen after a disaster and an additional 25 percent fail within one year. This failure rate is primarily due to businesses’ fundamental lack of preparedness.

Hurricane season officially began on June 1st and the National Oceanic and Atmospheric Administration is expecting the 2017 season to be above average. Businesses should prepare for the hurricane season by educating their employees and examining what best practices need to be adopted to maintain business continuity through disaster situations.

IT business continuity and disaster recovery today is no longer a matter of shipping backups to another location. In the event of a hurricane, entire infrastructures can be down for hours or days. What happens if power to the building is out for three days? What happens if there is no internet for a week? In extreme cases, equipment itself can be damaged. Insurance can recover the cost of physical damage, but your business needs to be up and running as soon as something happens.

True disaster recovery and business continuity mean the ability to be quickly and reliably running somewhere else within minutes. This doesn’t mean restoring a backup to a server standing by in some closet in another state, but actually moving entire operations in near real time and continuing the business. This can be done with a variety of cloud and software services that replicate up-to-the-minute changes to the secondary location. In an ideal situation, businesses can fail over their operations with little disruption ahead of any storm hitting. Once business is up and running in a safe location, the focus can return to employee and community safety, knowing that business needs are already taken care of.

When crafting a business continuity plan for an IT organization, I suggest a five-step approach. Step one, understand the technology options available. Are backups sufficient or is a true disaster recovery (DR) solution needed? Step two involves categorizing IT systems. Which systems are most critical to day-to-day operations? Often, organizations will take a hybrid approach to data protection, employing a DR solution for mission-critical applications while protecting less-critical applications with backups. Step three is implementation. When considering implementation of a business continuity solution, consider how – and how much – to invest. Once you have assessed what it will take to keep the business running, weigh that against the appetite for on-premises investment, capital and operating expenses, and ongoing management. Step four is to build the business continuity plan. At this point there are a number of decisions to be made – what sorts of situations constitute a disaster for the company? Who can declare a disaster and enact the plan? What are the formal procedures?
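Step two, the hybrid categorization, can be sketched as a simple mapping from each system's agreed recovery time objective (RTO) to a protection method. The inventory, tier names, and the 4-hour cut-off below are illustrative assumptions, not a standard:

```python
# Hypothetical inventory: each system tagged with the maximum tolerable
# downtime (RTO, in hours) the business has agreed for it.
systems = {
    "payments": 0.25,
    "crm": 4,
    "intranet": 48,
    "archive": 168,
}

def protection_tier(rto_hours: float) -> str:
    """Replication-based DR for mission-critical systems (tight RTO),
    plain backups for everything else. The 4-hour cut-off is an
    illustrative choice, not a rule."""
    return "replicated DR" if rto_hours <= 4 else "backup only"

plan = {name: protection_tier(rto) for name, rto in systems.items()}
```

The value of writing the categorization down, even this crudely, is that it forces the business, not IT, to commit to an RTO per system before money is spent on step three.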

The final step is testing. Just like a test evacuation of a building in an emergency drill, a test of resilient IT infrastructure is important not only to gain confidence that it will work but to gain an understanding of how to accomplish a complete business failover so that, in the case of a disaster, it really is as simple as clicking a few buttons.

Unlike sudden natural disasters such as tornadoes or earthquakes, hurricanes do allow you some lead time in order to enact a plan. What can’t be planned for is how long it will be before you can have your data center up and running again. Make sure wherever operations are running – be it a secondary location or a third-party cloud – it meets performance, security, and compliance needs.

Here are two examples of customers who understand their risk and have enabled true business continuity solutions in their environment.

Woodforest National Bank Finds a Summertime Home for Its Data

Woodforest National Bank, headquartered in Houston, Texas, experienced disaster during Hurricane Ike in 2008, losing power at its primary datacenter and running on generator power for 10 full days. Not wanting to experience that level of catastrophe again, its IT team transitioned from disaster recovery to disaster avoidance by pre-emptively failing over all production applications to a secondary site in Austin, TX every June, with a planned return to the primary site once hurricane season wraps up at the end of October.

What makes this failover of an entire datacenter a seamless action for Woodforest is its virtualized infrastructure, which is 95 percent virtualized and combines a hypervisor with Zerto hypervisor-based replication. This combination facilitates a much faster and more error-proof DR process, creating a strategy that is prepared to overcome any disaster.

R’Club Strengthens Its DRaaS Plan to Care for Children of First Responders

R’Club Child Care, Inc. is a not-for-profit childcare provider in the Tampa Bay, Florida area that cares for more than 4,000 children of first responders. R’Club’s IT team runs all its servers through on premise VMware and supports more than a dozen applications, all virtualized. While they run Veeam in their environment and back-up systems to a local SAN, they found that utilizing the off-site backup option through Veeam Cloud Connect helped them maintain mission critical IT applications at an affordable cost for the non-profit during times of disaster.

Prior to adopting a secure DRaaS with Veeam solution, R’Club worked with a local partner to lease space for replication with a nearby data center. R’Club used an off-the-shelf NAS device to copy their backups off-site. The process was cumbersome and error-prone, as the device would repeatedly fail and require rebooting. Further, off-site backups didn’t provide the assurance of ongoing availability that R’Club required. It would take hours or days to recover a system – and with their charter supporting first responders in the hurricane zone in Florida, that was time they couldn’t afford.

Now, if R’Club’s data center is swept away by a hurricane, the service provider can restore data through its BaaS operation.

Careful planning and understanding worst-case scenarios for business can help organizations build a comprehensive business continuity plan and disaster recovery strategy. Many companies have good intentions and start these plans, but fail to follow through. Now is the time to reflect on what is in place and consider if the current DR plan will get businesses through an unplanned disaster.

Before beginning a discussion on human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context is important because the change impacting risk management has happened so rapidly, we have hardly noticed. If you are under the age of 25, you take for granted the internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm, and desktop computers with “Windows” were rare except in large companies. Fast-forward to today: we don’t give a second thought to the changes a digital economy has brought to how we work, communicate, share information and conduct business.

What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks.  Is it possible that risks and the processes for measuring risk should remain static?  Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future?  Why are qualitative self-assessments still a common approach for measuring disparate risks?  More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?



(TNS) - If you’ve seen one hurricane, you’ve seen them all, right? Wrong.

Even the most seasoned Gulf Coast residents have something to learn about the amazingly complex and destructive storms that are hurricanes.

That’s why we asked an expert meteorologist to share his knowledge. Rocco Calaci is a partner and chief meteorologist at a weather technology company called MetLoop. He’s been studying the weather for 46 years now, and his daily email on Gulf Coast weather has thousands of readers.

Here are six common misconceptions he said people have about hurricanes:


