
Summer Journal

Volume 30, Issue 2


Industry Hot News


(TNS) - Fire Chief Steve Achilles acknowledged many city residents might not know Portsmouth's (N.H.) hazard mitigation plan exists.

"It's the kind of thing that they might never see, but people can take comfort knowing that we're thinking about these things," Achilles said this week.

City officials recently released the 2017 draft update of the plan, which was put together by several city officials, including Achilles, Deputy City Manager Nancy Colbert Puff and other fire, planning and Public Works staff.

"It's a document that the city has had for as long as I've been with the Fire Department and it gets updated every five years," Achilles said. "It's looking at how to reduce and mitigate hazards ahead of time to minimize the impact of natural disasters."

One key part of the plan is to identify what natural hazards Portsmouth could face, he said.



Cyber security software vendor Symantec today emerged as the only known western technology company to publicly refuse Russian government access to source code for its security products.

IBM, Cisco, Germany's SAP, Hewlett Packard Enterprise and McAfee are among the firms that allowed Russia to conduct source code reviews of products, including firewalls, anti-virus applications and other encrypted software, according to a new investigative report from Reuters.

The reviews – intended to protect Russia against cyber espionage – are conducted by the country’s Federal Service for Technical and Export Control (FSTEC), and the Federal Security Service (FSB), successor to the KGB and the agency blamed for attacking the 2016 U.S. Presidential election.



The enterprise has made great strides in curbing its appetite for energy over the past decade, but will this ultimately be a losing battle as demand for data continues to rise?

According to a recent report from the Lawrence Berkeley National Laboratory, the number of data centers coming online has seen a dramatic uptick in the past few years as organizations struggle to meet the always-on demands of an increasingly connected population. But the good news is that due to virtualization, low-power/high-density architectures and other developments, actual energy consumption has been flat. This is in stark contrast to the first decade of the new century, which saw energy demand jumping as high as 90 percent per year.

Still, leaders in the data center industry are concerned that the big gains in energy efficiency are over but the relentless demand for data, fueled in part by rapidly falling costs that are themselves the result of more efficient infrastructure, will put the industry on the fast track to dramatically higher consumption in relatively short order. At a meeting sponsored by DCD Energy this week, Donald Paul, of the University of Southern California’s Energy Institute, noted that once data centers approach a PUE of 1.0, there are no more gains to be had, since you cannot achieve more than 100 percent efficiency. And programs that encourage enterprises to reduce demands on the local grid also encourage the use of mostly diesel-powered backup systems.
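To see why 1.0 is a hard floor: PUE (power usage effectiveness) is the ratio of total facility energy to the energy delivered to IT equipment, so it can never fall below 1.0. A minimal sketch, with illustrative figures that are not from the article:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    Cooling, lighting and power-conversion losses are the overhead,
    so PUE >= 1.0 by definition; 1.0 means zero overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,200 kW total draw, of which 1,000 kW is IT load.
print(round(pue(1200, 1000), 2))  # -> 1.2
```

As the overhead shrinks toward zero, PUE approaches 1.0 and efficiency gains from the facility side are exhausted, which is the point Paul is making.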



Many business continuity professionals can attest to the tension that often occurs between the business and IT when it comes to recovery capabilities. For example, Company X recently implemented a business continuity program, including determining recovery time objectives (RTOs) for key business processes. Like all well-established business continuity programs, the business impact analysis (BIA) considered the loss of technology and helped the company develop recommended recovery time (and recovery point) objectives for technology resources. The business documented and presented these RTOs to management following the initial BIA, but never followed up with IT to ensure that the capabilities could be met.

Meanwhile, IT leveraged its own application/system list and related recovery information to prioritize applications for recovery and drive the implementation of a disaster recovery solution that was cost-effective and aligned with IT’s own conclusions about business recovery requirements (drawn from data outside the BIA). Both the business and IT feel confident in their work; yet, neither has communicated with the other. Given that the groups have not undergone a joint exercise (or actual disruption), neither group is aware of the underlying gap: Recovery priorities and strategies are misaligned between the business and IT.
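The gap described above can be made concrete by comparing the RTOs the business documented in its BIA against the recovery times IT's solution can actually deliver. A hypothetical sketch, with process names and hours invented purely for illustration:

```python
# Business RTOs from the BIA (hours) vs. IT's achievable recovery time (hours).
business_rto = {"order_entry": 4, "payroll": 24, "reporting": 72}
it_capability = {"order_entry": 24, "payroll": 24, "reporting": 8}

# A gap exists wherever IT's achievable recovery time exceeds the business RTO.
gaps = {app: (business_rto[app], it_capability[app])
        for app in business_rto
        if it_capability.get(app, float("inf")) > business_rto[app]}

for app, (needed, actual) in sorted(gaps.items()):
    print(f"{app}: business needs {needed}h, IT can deliver {actual}h")
```

Without a joint exercise, only a comparison like this (or a real disruption) surfaces the mismatch.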



The Business Continuity Institute

Building resilience by improving cyber security, published by the Business Continuity Institute during Business Continuity Awareness Week, revealed that users are often choosing weak passwords and so leaving their IT networks vulnerable, and this vulnerability has now been realised at the UK Houses of Parliament. Over the weekend, Parliament experienced what was described as a sustained and determined cyber attack that forced remote access to be restricted for Members of both Houses, as well as their aides.

A senior spokesperson for Parliament commented: "We have discovered unauthorised attempts to access accounts of parliamentary networks users and are investigating this ongoing incident, working closely with the National Cyber Security Centre. Parliament has robust measures in place to protect all of our accounts and systems, and we are taking the necessary steps to protect and secure our network."

It was reported that the attack, which began last Friday, was specifically trying to identify weak passwords and gain access to users' email accounts. Ultimately this was successful with fewer than 1% of accounts, but this still amounts to about 90 people, and potentially resulted in sensitive data being exposed.

International Trade Secretary Liam Fox said: "We have seen reports in the last few days of even cabinet ministers' passwords being for sale online. We know that our public services are attacked so it is not at all surprising that there should be an attempt to hack into parliamentary emails. And it's a warning to everybody, whether they are in Parliament or elsewhere, that they need to do everything possible to maintain their own cyber security."

While the restriction of remote access seems to have abruptly and effectively ended the attack, it left many Parliamentarians and their staff without access to their emails over the weekend, a time when many of them attempt to catch up with constituency work.

The report published by the BCI highlighted several ways in which users can take responsibility for helping to improve cyber security, and this included the use of strong passwords that cannot easily be hacked or guessed. By doing so it means that everyone can play their part in building a resilient organization.

Jargon crops up everywhere, and business continuity is no exception. RTO, RPO, BIA, and others are often sprinkled liberally into conversations, plans, and reports.

Sometimes expanding the abbreviation makes things clearer to the uninitiated: for example, the terms “recovery time objective” (RTO) for an IT system and “business impact analysis” (BIA) for BC planning give some hint of what lies behind them.

But what about “recovery point objective” (RPO), also one of the commonest terms used in defining a suitable disaster recovery/business continuity plan? Would we be better off if we banned the use of such jargon?

Banning probably wouldn’t work. For one thing, it would be the curtailing of free speech, and for another, like weeds, jargon would spring up again anyway. We need a better way of managing business continuity jargon, recognizing that it also has its uses.
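For the uninitiated, the two objectives behind the jargon can be illustrated concretely: RPO bounds how much data (measured in time) you can afford to lose, while RTO bounds how long you can afford to be down. A sketch with illustrative targets and timestamps, none of which come from the article:

```python
from datetime import datetime, timedelta

# RPO: maximum tolerable age of the data you restore (work you can lose).
# RTO: maximum tolerable downtime before the service must be back.
rpo = timedelta(hours=1)   # illustrative targets
rto = timedelta(hours=4)

outage_start = datetime(2017, 6, 26, 9, 0)
last_backup = datetime(2017, 6, 26, 8, 30)
service_restored = datetime(2017, 6, 26, 12, 0)

data_loss_window = outage_start - last_backup   # governed by the RPO
downtime = service_restored - outage_start      # governed by the RTO

print(data_loss_window <= rpo)  # 30 minutes of lost work fits the 1 h RPO
print(downtime <= rto)          # 3 hours of downtime fits the 4 h RTO
```

In practice the RPO dictates how often you back up, and the RTO dictates how much you invest in recovery infrastructure.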



We all want to know something others don’t know. People have long sought “local knowledge,” “the inside scoop” or “a heads up” – the restaurant not in the guidebook, the real version of the story, or some advanced warning. What they really want is an advantage over common knowledge – and the unique information source that delivers it. They’re looking for alternative data – or “alt-data.”

From the information age where everyone took advantage of easy access to information, we are now entering an age where everyone seeks alternatives: new sources of information and innovative ways of deriving unique insights.  This is the “Age of Alt.”

We know that business leaders want to better leverage data and analytics in their decision-making. But more importantly most decision-makers want to supplement their own data with external data; 81% tell us they want to expand their ability to source new external data.  Demand for data is exploding.



5 Hot Storage Technologies to Watch

Friday, 23 June 2017 15:35

Many technologies are billed as hot, exciting and revolutionary. But which ones are really deserving of that moniker? Which ones are destined to change — or are changing — the storage universe?

Enterprise Storage Forum asked the experts.

What comes to mind when you hear the word “compliance”? Do you shiver, sigh, break out into hives, or all three? Believe it or not, your compliance colleagues are crucial to your social marketing success. This is especially true for marketers in regulated spaces such as financial services, healthcare, and pharmaceuticals. I can share from personal experience that my social marketing success at American Express was in part due to the relationships I fostered with compliance, legal, and even outside legal counsel — in fact, I’m still in touch with those former colleagues. Given the importance of breaking down the marketing compliance silo, I partnered with my colleague Nick Hayes on a new report. And though the intention of this report is to help marketers in regulated industries, Nick and I both agree that all marketers can benefit from it.



We all make mistakes, and CCOs are no exception. While CCOs are a creative and dedicated bunch, they are often susceptible to these five common mistakes. Probably unsurprisingly, the cure for these ills is more due diligence and more relationship building.

Chief Compliance Officers are fallible – I know that is not a controversial statement. To err is human, and CCOs are members of the human species.

With the enormous expectations placed on CCOs’ shoulders, they are bound to make some mistakes. I have seen CCOs who have run into difficulties, and occasionally they have contributed to the problem through their own behaviors.

I thought I would identify some of the common mistakes I have seen. It is hard to generalize, but I have observed some common themes.



The Business Continuity Institute

One in ten small business owners and employees are regularly putting the security of their data at risk by sharing confidential files on personal devices, or sending documents to personal rather than work emails. This demonstrates a significant lapse in data security among the UK’s five million plus small businesses.

The study by Reckon also found that a quarter of small business owners (25%) and their teams save documents onto their desktops rather than a central server. This also means there is less likelihood of the data being backed up, so should a computer failure occur then the data could be lost. These statistics were just as prevalent in larger SMEs, those with a turnover of £10 million or more, as the findings showed that the same 10% of these larger businesses sent documents to personal devices and a third saved documents on desktops rather than central servers.

"We believe the reasons behind these data breaches may include ease of access when working remotely, and keeping documents to hand rather than sorting through mismanaged folders," said Mark Woolley, Commercial Director at Reckon.

Sending and saving documents incorrectly and to personal devices breaches basic data security guidelines and could even put employers and employees at risk of breaching data protection laws. Such practices also place confidential information at risk of hacks or unauthorised use, and also mean that employers cannot provide complete audit trails of documents within their own business.

It’s concerning that so many SMEs in the UK are ignoring basic data protection rules. The findings are especially worrying where SME owners are involved, as they are placing their own organization’s sensitive information at risk. Incorrectly managing data and information in this way can pose financial, reputational and security issues to a business, something that no business owner wants to have to deal with.

Cyber security is as much of an issue for SMEs as it is for larger organizations according to the Business Continuity Institute's latest Horizon Scan Report which showed that organizations of all sizes share the same concerns. A global survey identified the top three concerns for both SMEs and large organizations as cyber attack, data breach and unplanned network outage.

“Bad habits can easily stick, particularly amongst teams within businesses where there aren’t clear policies around data security,” added Mark Woolley. “I’d urge new businesses to set guidelines around working with documents and emails at the outset in order to give themselves a head start when it comes to keeping information safe. Businesses should also consider that new legislation such as the General Data Protection Regulation will incorporate additional data security into law, making adhering to basic practices of vital importance."

The Business Continuity Institute

Cyber attackers are relying more than ever on exploiting people instead of software flaws to install malware, steal credentials/confidential information, and transfer funds. A study by Proofpoint found that more than 90% of malicious email messages featuring nefarious URLs led users to credential phishing pages, and almost all (99%) email-based financial fraud attacks relied on human clicks rather than automated exploits to install malware.

The Human Factor Report found that business email compromise (BEC) attack message volume rose from 1% in 2015 to 42% by the end of 2016 relative to emails bearing banking Trojans. BEC attacks, which have cost organizations more than $5 billion worldwide, use malware-free messages to trick recipients into sending confidential information or funds to cyber criminals. BEC is the fastest growing category of email-based attacks.

“Accelerating a shift that began in 2015, cyber criminals are aggressively using attacks that depend on clicks by humans rather than vulnerable software exploits - tricking victims into carrying out the attack themselves,” said Kevin Epstein, vice president of Proofpoint’s Threat Operations Center. “It’s critical that organizations deploy advanced protection that stops attackers before they have a chance to reach potential victims. The earlier in the attack chain you can detect malicious content, the easier it is to block, contain, and resolve.”

Someone will always click, and fast. Nearly 90% of clicks on malicious URLs occur within the first 24 hours of delivery with 25% of those occurring in just ten minutes, and nearly 50% of clicks occur within an hour. The median time-to-click (the time between arrival and click) is shortest during business hours from 8am to 3pm EDT in the US and Canada, a pattern that generally holds for the UK and Europe as well.

Watch your inbox closely on Thursdays. Malicious email attachment message volume spikes more than 38% on Thursdays over the average weekday volume. Ransomware attackers in particular favor sending malicious messages Tuesday through Thursday. On the other hand, Wednesday is the peak day for banking Trojans. Point-of-sale (POS) campaigns are sent almost exclusively on Thursday and Friday, while keyloggers and backdoors favour Mondays.

Attackers understand email habits and send most email messages in the 4-5 hours after the start of the business day, peaking around lunchtime. Users in the US, Canada, and Australia tend to do most of their clicking during this time period, while French clicking peaks around 1pm. Swiss and German users don’t wait for lunch to click, their clicks peak in the first hours of the working day. UK workers pace their clicking evenly over the course of the day, with a clear drop in activity after 2pm.

The Business Continuity Institute

The United Nations Office for Disaster Risk Reduction has claimed that climate change is greatly increasing the likelihood of devastating wildfires, such as the one that burned its way across Portugal last weekend but is now reported to be under control.

More than 60 fires broke out in a densely forested area near the small town of Pedrógão Grande, 200km north-east of Lisbon, killing more than 60 people, in what Portuguese Prime Minister Antonio Costa described as the country’s “greatest human tragedy in living memory."

Dr Robert Glasser, the United Nations Special Representative of the Secretary-General for Disaster Risk Reduction, urged countries to integrate climate change risk in their fire prevention and response planning, commenting that "the fire highlights the urgency of global efforts to reduce greenhouse gases as quickly as possible."

Organizations in regions where wildfires are a possibility need to consider how they would respond to such an incident, or any incident that could result in the loss of facilities, danger to staff, or the evacuation of people from the region. Actions that need to be thought through include how to communicate with staff, or other stakeholders, during the event, primarily to ensure their safety, but also to liaise with them about alternative work arrangements. If facilities have been damaged then organizations will need to consider where staff can work in both the short term and the long term, bearing in mind that staff may not want to work in the short term, as the organization is unlikely to be their top priority.

Adverse weather, which can lead to the conditions that cause and spread wildfires, such as lack of rainfall, high temperatures and strong winds, featured fifth in the list of concerns that business continuity professionals have, as identified in the Business Continuity Institute's latest Horizon Scan Report. Climate change is not yet widely considered an issue, however, as only 23% of respondents to a global survey considered it necessary to evaluate climate change for its business continuity implications. Given this latest statement from UNISDR, perhaps now is the time to start giving it greater consideration.

A new study published in Nature Climate Change found that 30% of the world’s population is currently exposed to potentially deadly heat for 20 days per year or more.

Heavy rainfall due to Tropical Storm Cindy is expected to produce flash flooding across parts of southern Louisiana, Mississippi, Alabama, and the Florida Panhandle, according to the National Hurricane Center (NHC).

Total rain accumulations of 6 to 9 inches with isolated maximum amounts of 12 inches are expected in those areas, the NHC says.

On Tuesday, Alabama Governor Kay Ivey declared a statewide state of emergency in preparation for severe weather and warned residents to be prepared for potential flood conditions.

FEMA flood safety and preparation tips are here.



MSPs know that customers expect both scale and economics when it comes to the cloud.

For most, this means public cloud options like AWS, Google and Azure.

The subtitle for RightScale’s “2017 State of the Cloud Report” says it all: “Public cloud adoption grows as private cloud wanes.”

Public cloud services dominate news cycles for enterprise IT, and on the surface, the numbers seem to align with this narrative: organizations are increasingly leveraging public and hybrid cloud, while private cloud use feels like part of a forgotten era.



Attack sophistication is growing. Twenty years ago, social engineering had already made inroads and automated attacks were on the rise, with denial-of-service, browser executable attacks, and techniques for uncovering vulnerabilities in the binary code of applications.

Today, attacks are bigger, faster, and deeper, ranging from blended (cyber-physical) attacks and malicious counterfeit hardware, to entire supply chain compromises and adaptive attacks on critical infrastructure.

Yet in another sense attacks are on a downward trend, possibly giving enterprises and individuals a better chance of protection.



The Risk of Virtualization

Wednesday, 21 June 2017 14:48

As virtualization becomes the norm, the risk of virtualization should be at the forefront of any business continuity manager’s mind. We’ve compiled a list of areas of concern and controls to reference throughout your virtualization transitions.

As organizations adopt and expand the use of cloud computing (e.g., software as a service – SaaS, infrastructure as a service – IaaS), most do not consider the acceptance of virtual infrastructure to be a major risk. Virtualization is the norm, and physical-based servers and storage are the exceptions. Nevertheless, you must consider the risks associated with your virtual environment as part of your overall risk assessment.

In the highly competitive market for security tools, many vendors make the misleading claim of having the best of everything, and at this point in time "everything" often refers to data science, machine learning and AI. The result is an arms race of claims about tools that  “automagically” address security problems, according to Forrester Research.

In its recent report "The Top Security Technology Trends to Watch, 2017," the analyst firm called out the battle of the data science algorithms, saying, "When virtually every security vendor makes the claim that they’re using artificial intelligence or machine learning for detection, security decision makers are left shaking their heads, trying to figure out what’s real and what’s not."

Still, decision makers need solutions. And it should be noted that, "data science has been part of cybersecurity for as long as there has been a category called cybersecurity. Machine learning and artificial intelligence do have roles to play in security, but they are not a panacea for the prevention of all cyberattacks," the report said.



The Business Continuity Institute

The average cost of a data breach is $3.62 million globally, a 10% decrease from the 2016 results, according to IBM's latest Cost of Data Breach Study, conducted in collaboration with the Ponemon Institute. This is the first time since the global study was created that there has been an overall decrease in the cost. On average, these data breaches cost companies $141 per lost or stolen record.

For the third year in a row, the study also found that having an Incident Response Team in place significantly reduced the cost of a data breach, saving more than $19 per lost or stolen record. The speed at which a breach can be identified and contained is in large part due to the use of an IRT and having a formal Incident Response Plan.

The Business Continuity Institute's latest Horizon Scan Report identified data breaches as the number two concern for business continuity and resilience professionals, with 81% of respondents to a global survey expressing concern about the prospect of a breach occurring. It cannot be emphasised enough, therefore, just how important it is for organizations to have plans in place to respond to such an incident and help lessen its impact.

According to the IBM study, how quickly an organization can contain a data breach has a direct impact on financial consequences. The cost of a data breach was nearly $1 million lower on average for organizations that were able to contain a data breach in fewer than 30 days compared to those that took longer than 30 days. Speed of response will be increasingly critical as GDPR is implemented in May 2018, which will require organizations doing business in Europe to report data breaches within 72 hours or risk facing fines of up to 4% of their global annual turnover.
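The study's per-record and containment figures lend themselves to a back-of-the-envelope estimate. A rough sketch using the averages quoted in this article (the breach size is hypothetical, and real costs vary widely):

```python
COST_PER_RECORD = 141                # global average per lost/stolen record (2017 study)
IRT_SAVING_PER_RECORD = 19           # average saving attributed to an incident response team
FAST_CONTAINMENT_SAVING = 1_000_000  # approx. saving for containment under 30 days

def estimate_breach_cost(records: int, has_irt: bool, contained_under_30d: bool) -> int:
    """Back-of-the-envelope breach cost from the study's published averages."""
    per_record = COST_PER_RECORD - (IRT_SAVING_PER_RECORD if has_irt else 0)
    cost = records * per_record
    if contained_under_30d:
        cost -= FAST_CONTAINMENT_SAVING
    return max(cost, 0)

# Hypothetical 25,000-record breach, with an IRT and containment under 30 days:
print(estimate_breach_cost(25_000, True, True))  # -> 2050000
```

Averages like these hide wide regional variation (the same report puts the US figure at $7.35 million per breach), so such an estimate is a planning aid, not a prediction.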

"New regulatory requirements like GDPR in Europe pose a challenge and an opportunity for businesses seeking to better manage their response to data breaches," said Wendi Whitmore, Global Lead, IBM X-Force Incident Response & Intelligence Services (IRIS). "Quickly identifying what has happened, what the attacker has access to, and how to contain and remove their access is more important than ever. With that in mind, having a comprehensive incident response plan in place is critical, so when an organization experiences an incident, they can respond quickly and effectively."

While the global study revealed that the overall cost of a data breach decreased to $3.62 million, many regions still experienced an increased cost of a data breach. For example, the cost of a data breach in the US was $7.35 million, a 5% increase compared to last year. However, the US wasn't the only country to experience increased costs in 2017. Organizations in the Middle East, Japan, South Africa, and India all experienced increased costs in 2017 compared to the four-year average costs. Germany, France, Italy and the UK all experienced significant decreases compared to the four-year average costs. Australia, Canada and Brazil also experienced decreased costs compared to the four-year average cost of a data breach.

When compared to other regions, US organizations experienced the most expensive data breaches in the 2017 report. In the Middle East, organizations saw the second highest average cost of a data breach at $4.94 million, a more than 10% increase over the previous year. Canada was the third most expensive country for data breaches, costing organizations an average of $4.31 million. In Brazil data breaches were the least expensive overall, costing companies only $1.52 million.

"Data breaches and the implications associated continue to be an unfortunate reality for today's businesses," said Dr. Larry Ponemon. "Year-over-year we see the tremendous cost burden that organizations face following a data breach. Details from the report illustrate factors that impact the cost of a data breach, and as part of an organization's overall security strategy, they should consider these factors as they determine overall security strategy and ongoing investments in technology and services."

The Business Continuity Institute

Why do we have business continuity management programmes? Is it because we want to make sure our organizations have the capability to respond to a disruption? Probably yes! It is common sense that we would want to be prepared for any future crisis.

In some cases however, it is also because there is a legal obligation to do so. Many organizations are tightly regulated depending on what sector they are in or the country they are based, and therefore must have plans in place to deal with certain situations. Furthermore, the rules and regulations that govern us are often being revised, and sometimes it can be difficult to keep up with which ones are applicable.

So how do you know which rules apply to you? The Business Continuity Institute's BCM Legislation, Regulations, Standards and Good Practice publication would be a great place to start.

The BCI does its best to check the validity of the details within this document, but we are reliant on those working in the industry to provide updates. Please help inform our next edition by looking at the current version and advising us of any changes required for your region. If you do come across any inaccuracies then please contact the BCI so that the required updates can be made.

The Business Continuity Institute

It may not have been as disruptive or anywhere near as costly as the IT outage that affected BA just a few weeks ago, but many people are still suffering the consequences of an "unforeseen technical fault" that caused Tesco, the world's third most profitable retailer, to cancel a large number of its home deliveries in the UK.

We experienced an unforeseen technical fault which resulted in the forced cancellation of many orders due to a complete system failure. 2/4

— Tesco (@Tesco) June 20, 2017

Many people in the UK have become so reliant on supermarket deliveries that not having to visit the actual store has become a way of life. Having that comfort removed is not only a nuisance, it can completely disrupt busy schedules. Those who had ordered sun-cream for their children to take to school, those who were placing some last-minute orders before heading off to the Glastonbury Festival, and those who are simply unable to leave the house will now have to make alternative arrangements.

Incidents like this aren't rare occurrences. While they may not be commonplace, they occur often enough to warrant featuring in third place in the Business Continuity Institute's latest Horizon Scan Report. Organizations must therefore be prepared to deal with the possibility that one will occur.

For Tesco it could be quite costly, not just in terms of lost revenue, but also in terms of lost reputation, as many who had their orders cancelled soon took to Twitter to express their outrage. Some of those people will re-arrange their deliveries, but others will not. Many of them will now shop elsewhere, both on this occasion and in the future, if they don't consider Tesco to be reliable. This is why it is important for organizations to make sure they have a plan in place to deal with the consequences of any form of disruptive event.

Imagine going into an outpatient facility for a simple procedure and coming out weeks later confined to a wheelchair. That’s what happened to Mallory Weggemann — who’s now a professional athlete, motivational speaker and writer at The Factory Agency — when she was just 18 years old. How has Mallory overcome adversity and found strength in her disability? Not only personally, but also as a Paralympic swimmer?

In this episode of Resilient, Mallory joins Deloitte Advisory’s Mike Kearney to share her story and discuss why it’s never too late to pick yourself up and make an impact that matters.

“We all have a disability. Everybody has that thing in life that they’re struggling with… We all have to figure out how to navigate through that. How can our disabilities enable us and not disable us?”



Nothing Small or Easy About Smart Cities

Tuesday, 20 June 2017 14:33

Smart cities are the ultimate emerging platform in ways both good and bad. Positives include healthier and happier citizens, more efficient and environmentally responsible communities, and better services to attract and support businesses.

On the flip side, however, smart cities are poorly defined and based on complex technologies, such as the Internet of Things (IoT), that are just emerging. They demand significant investment and, if projects fail, the host community can be worse off than if the project had not been undertaken in the first place.

So the stakes are high. There was some news on the smart cities front last week. Hitachi Insight Group updated the smart city portfolio that the company introduced in May 2016. The updates, according to eWeek, are Hitachi Visualization Suite 5.0, Hitachi Smart Camera 200 and Hitachi Digital Evidence Management.

Ammonia Leak Sparks Emergency Response

Tuesday, 20 June 2017 14:32

(TNS) - An ammonia leak at Cashmere’s former Tree Top plant led officials to issue a shelter-in-place order for the area for about an hour Sunday afternoon, and to briefly shut down Highway 2.

Chelan County, Wash., Emergency Management issued the shelter order about 2:50 p.m. for a half-mile radius around the fruit packaging plant, 210 Titchenal Road, and issued the all-clear just after 4 p.m. once the leak was capped.

Washington State Patrol Trooper John Bryant said the leak issued from a 13,000-pound ammonia tank at the Tree Top plant, a former juicing facility which has not operated since the Selah-based company shut it down in 2008.

“As soon as they saw what happened, they attempted to try and vent it, and advised the residents in the area pretty quickly,” Bryant said.

Emergencies come in many forms: fires, hurricanes, earthquakes, tornadoes, floods, violent storms and even terrorism. In the event of extreme weather or a disaster, would you know what to do to protect your pet?

Many pet owners are unsure of what to do if they’re faced with such a situation. In recognition of National Pet Preparedness Month, here are five steps you can take to keep your pets safe during and after an emergency:

  1. Have a plan – include what you would do if you aren’t home or cannot get to your pet when disaster strikes. You never want to leave a pet behind in an emergency because they most likely cannot fend for themselves or may end up getting lost. Find a local pet daycare, a friend, or pet sitter who can get to your pet if you cannot. Make plans ahead of time to evacuate to somewhere that is pet friendly, such as a pet-friendly hotel or the home of a friend or family member outside the evacuation area.
  2. Make a kit – stock up on food and water. It is crucial that your pet has enough water in an emergency. Never allow your pet to drink tap water immediately following a storm; there could be chemicals and bacteria in tap water so give them bottled water. Also, be sure to stock up on canned food. Don’t forget a can opener, or buy enough pop-top cans to last about a week.
  3. I.C.E. – No, not the frozen kind – it stands for “In Case of Emergency.” If your pet gets lost or runs away during an emergency, have information with you that will help find them, including recent photos and behavioral characteristics or traits. These can help return them safely to you.
  4. Make sure vaccinations are up to date – If your pet needs to stay at a shelter, you will need to have important documents about vaccinations or medications. Make sure their vaccinations are up to date so you don’t have any issues if you have to leave your pet in a safe place.
  5. Have a safe haven – Just like people, pets will become stressed when their safety is at risk. Whether you are waiting out a storm or evacuating to a different area, be sure to bring their favorite toys, always have a leash and collar on hand for their safety, and pack a comfortable bed or cage for proper security. If your pet is prone to anxiety, there are stress-relieving products like a dog anxiety vest or natural stress-relieving medications and sprays that can help comfort them in times of emergency. Ask your veterinarian what would be best for your pet.

Some other things to think about are:

  • Rescue Alert Sticker – Put a rescue alert sticker by your front door to let people know there are pets inside. If you are able to take your pets with you, cross out the sticker and put “evacuated” or another message to let rescue workers know that your pet is safely out of your home.
  • Let pets adjust – Don’t allow your pet to run back into your home or even your neighborhood once you and your family have returned. Your home could be disheveled and things might look different, and these changes can potentially disorient and stress your pet. Keep your pet on a leash and safely ease him/her back home. Make sure they are not eating or picking up anything that could potentially be dangerous, such as downed wires or water that might be contaminated.
  • Microchip your pet – Getting a microchip for your pet could be the difference between keeping them safe and them becoming a stray. Microchips allow veterinarians to scan lost animals to determine their identity so they can be returned home safely. Make sure your microchip is registered and up to date so if your pet gets lost, your information is accessible to anyone who finds your pet.

Resources for Pet Owners

Posted by Crystal Bruce, Health Communications Specialist, Office of Public Health Preparedness and Response


The Business Continuity Institute

Do you want to be the first to discover the findings of our 2017 Cyber Resilience Report? Do you want an opportunity to network with some of your business continuity and resilience colleagues?

Join us for a breakfast briefing on Tuesday 27th June at Liberty House on Regent Street, London where we will launch the 2017 Cyber Resilience Report, produced in collaboration with Sungard Availability Services. The report's author - Patrick Alcantara DBCI, Research and Insight Lead at the Business Continuity Institute - will be revealing many of its key findings, offering some insight into the analysis, and answering any questions you may have.

Cyber resilience is becoming a priority in our business continuity programmes, with cyber attacks and data breaches listed in our 2017 Horizon Scan Report as the top two concerns by those in the industry. If you need any further evidence of the magnitude of a cyber security incident, you only have to look at the disruption the WannaCry ransomware attack caused globally, or perhaps the lack of disruption as many organizations invoked their business continuity plans and dealt with it effectively.

BCI research is key to developing our understanding of the industry, improving knowledge and enhancing organizational resilience. Our reports are frequently referenced by business continuity professionals as part of their planning process, as well as being cited by academics.

Join us on the 27th June to learn more about cyber resilience and what we can do to enhance it.

Register for the event by clicking here.

JEFFERSON CITY, Mo. – Survivors who apply for assistance from the Federal Emergency Management Agency as a result of the federal declaration for flooding from April 28 to May 11, 2017, will receive a letter in the mail from FEMA. The letter will explain the status of their application and how to respond. It is important to read the letter carefully.

Many times applicants need to submit more information for FEMA to continue to process their application.

Examples of missing documentation may include an insurance settlement letter, proof of residence, proof of ownership of the damaged property, and proof that the damaged property was their primary residence at the time of the disaster.

Survivors who have questions about the letter may call the FEMA Helpline at 800-621-3362; go online to www.DisasterAssistance.gov; or visit a disaster recovery center.

To locate the nearest disaster recovery center, they may call the FEMA Helpline; use the FEMA app for smartphones; or go online to www.fema.gov/DRC or https://recovery.mo.gov/.

Survivors may appeal FEMA’s decision. For example, if survivors feel the amount or type of assistance is incorrect, they may submit an appeal letter and any documents needed to support their claim, such as a contractor’s estimate for home repairs.

If survivors have insurance, FEMA cannot duplicate insurance payments. However, if they are underinsured they may receive further assistance for unmet needs after insurance claims have been settled.

How to Appeal a FEMA Decision

All appeals must be filed in writing to FEMA. Survivors should explain why they think the decision is incorrect. When submitting the letter, they should include:

  • Full name
  • Date and place of birth
  • Address of the damaged dwelling
  • FEMA registration number

In addition, the letter must either be notarized – if they choose this option, they should include a copy of a state-issued identification card – or include the following statement, “I hereby declare under penalty of perjury that the foregoing is true and correct.” The survivor must sign the letter. 

If someone other than the survivor or the co-applicant is writing the letter, there must be a signed statement affirming that the person may act on their behalf. The survivor should keep a copy of the appeal for their records.

To file an appeal, letters must be postmarked, received by fax, or personally submitted at a disaster recovery center within 60 days of the date on the determination letter.

By mail:

FEMA – Individuals & Households Program
National Processing Service Center
P.O. Box 10055
Hyattsville, MD 20782-7055

By fax:
Attention: FEMA – Individuals & Households Program

If survivors have any questions about submitting insurance documents, proving occupancy or ownership, or anything else about their letter, they may call the FEMA Helpline at 800-621-3362. Those who use 711 or Video Relay Services may call 800-621-3362. Those who use TTY may call 800-462-7585; MO Relay 800-735-2966; CapTel 877-242-2823; Speech to Speech 877-735-7877; VCO 800-735-0135. Operators will be available from 6 a.m. to 10 p.m. seven days a week until further notice.

FEMA and Missouri’s State Emergency Management Agency (SEMA) are committed to ensuring services and assistance are available for people with disabilities or others with access and functional needs. When they register, they should let FEMA staff know that they have a need or a reasonable accommodation request.

The federal disaster declaration covers eligible losses caused by flooding and severe storms between April 28 and May 11, 2017 in these counties: Bollinger, Butler, Carter, Douglas, Dunklin, Franklin, Gasconade, Howell, Jasper, Jefferson, Madison, Maries, McDonald, Newton, Oregon, Osage, Ozark, Pemiscot, Phelps, Pulaski, Reynolds, Ripley, Shannon, St. Louis, Stone, Taney, and Texas.

Monday, 19 June 2017 14:13

Understanding the FEMA Letter

CHICAGO – Summer is finally here, and while that means fun in the sun, it can also bring the threat of dangerous storms. In recognition of Lightning Safety Awareness Week, the Federal Emergency Management Agency’s Region 5 office wants you to learn how to reduce your lightning risk while outdoors.

“If you hear thunder, lightning is close enough to pose an immediate threat,” said FEMA Region V Acting Administrator Janet M. Odeshoo. “Seek shelter as quickly as possible. There is no place outside that is safe when a thunderstorm is in the area.”

Substantial buildings such as offices, schools, and homes offer good protection. Once inside, stay away from windows and doors and anything that conducts electricity such as corded phones, wiring, plumbing, and anything connected to these. If you are caught outside with no safe shelter nearby, the following actions may reduce your risk:

  • Never shelter under an isolated tree, tower or utility pole. Lightning tends to strike the taller objects in an area.
  • Immediately get off elevated areas such as hills, mountain ridges or peaks.
  • Immediately get out and away from ponds, lakes and other bodies of water.
  • Stay away from objects that conduct electricity, including wires and fences.
  • Never lie flat on the ground.

The best way to protect yourself against lightning injury or death is to monitor the weather and postpone or cancel outdoor activities when thunderstorms are in the forecast. Lightning can strike from 10 miles away, so if you can hear thunder, you are in danger of being struck by lightning.

For additional information on lightning safety—wherever you may be this summer—visit www.ready.gov/thunderstorms-lightning. You can find more valuable storm safety tips by visiting www.lightningsafety.noaa.gov.  Consider also downloading the free FEMA app, available for your Android, Apple or Blackberry device, so you have the information at your fingertips to prepare for severe weather.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema.  The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

The Business Continuity Institute

We are used to assessing the immediate threats to our organizations as they happen. Organizations across the world are suffering from adverse weather, cyber attacks, supply chain failures and technical failures. These events may not affect our own organizations straight away, but given our increasing dependence on other organizations, they probably will in the near future.

But what about the long-term future? Organizational strategies often look beyond the short term, with five-year or even ten-year plans in place. So when we consider business continuity and resilience, should we be looking further ahead as well? Should we be assessing the megatrends that our organizations need to be preparing for now?

Megatrends are seen as the large social, economic, political, environmental or technological changes that occur over the long-term, changes that have the potential to profoundly shape the way we work and live our lives. Climate change, and everything it entails, is one such megatrend that could, or perhaps already is, having a major impact on our organizations.

The Business Continuity Institute is delighted to be collaborating with Siemens on a new study that will look at how organizations build resilience across the board, and what they think about climate change as one of these megatrends. You can help inform this study by taking a few minutes to complete the survey, and be in with a chance of winning a €100 Bol.com gift card.

This study is primarily looking at responses from the Benelux region, but input would be welcome from elsewhere in order to help make comparisons.

Having workloads distributed across multiple clouds and on-premises is the reality for most enterprise IT today. According to research by Enterprise Strategy Group, 75 percent of current public cloud infrastructure customers use multiple cloud service providers. A multi-cloud approach has a range of benefits, but it also presents significant challenges when it comes to security.

Security in a multi-cloud world looks a lot different than the days of securing virtual machines, HashiCorp co-founder and co-CTO Armon Dadgar said in an interview with ITPro.

“Our view of security is it needs a new approach from what we’re used to,” he said. “Traditionally, if we go back to the VM world, the approach was sort of what we call a castle and moat. You have your four walls of your data center, there’s a single ingress or egress point, and that’s where we’re going to stack all of our security middleware.”



Musings of a Cognitive Risk Manager

To drive change, you need buy-in, and to achieve buy-in, your people need to know the “why” behind the change. This is the premise behind cognitive risk governance, the “designer” of human-centered risk management. James Bone, author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind, further explains the cogrisk framework.

In my last article, I explained the difference between traditional risk management and human-centered risk management and began building the case for why we must re-imagine risk management for the 21st century.  I purposely did not get into the details right away, because it is really important to understand why a thing must change before change can really happen.  In fact, change is almost impossible without understanding why.

Why put on sunscreen if you don’t know that skin cancer is caused by too much exposure to ultraviolet rays from the sun?  We know that drinking and driving is one of the deadly causes of highway fatalities, but we still do it!  Knowing the risk of a thing doesn’t prevent us from taking the chance anyway.  This is why diets are so hard to maintain and habits are so hard to change.  We humans do irrational things for reasons we don’t fully understand.  That is precisely why we need cognitive risk governance.



Every so often it’s good to shake things up. Sometimes the simple act of asking questions about what we do in business continuity and why we do it can give us a fresh point of view and point out areas for improvement.

The venerable business impact analysis (BIA) is a case in point. Do you produce a BIA because it helps you optimise business continuity and its cost-effectiveness?

Or do you have one because the auditors ask for it and it’s part of the process? Adaptive BC challenges business continuity managers on this and several other important points.

It’s a fact that we lose focus at some point while we are working. The loss of focus can range from a few seconds to several years.



The Business Continuity Institute

Many organizations don’t devote enough attention to mission-critical applications when creating disaster recovery (DR) plans, and one of the biggest reasons is the 'resiliency perception gap', or the gap between executives’ perceptions of the effectiveness of their resiliency strategies and how successful these plans actually are at protecting against application outages or downtime. This gap can result in lost revenue and damaged brand reputations.

A new Forbes Insights Executive Brief, sponsored by IBM, showed that 80% of respondents fully expect that their disaster recovery plans can run their business in the aftermath of a disruption. Yet this confidence is questionable. Less than a quarter of these same executives say they include all critical applications in their DR strategies, which means 78% of enterprises face unplanned and unnecessary risks for these essential resources.

The report, Business resiliency: now’s the time to transform continuity strategies, also noted that gaps exist in management and governance activities, with 61% of executives saying that business continuity, disaster recovery and crisis management are siloed rather than administered as they should be - as an interrelated whole.

Many organizations don’t have the means, or the desire, to fully protect critical assets as nearly three-quarters (73%) of surveyed executives pointed to shortfalls in funding and other resources as impediments to covering all critical applications within DR programmes. In addition, another quarter of executives don’t even consider it essential to cover 100% of their critical applications.

Outdated runbooks are common as more than half of enterprises (58%) go almost a year, sometimes longer, between tests of their business continuity and DR plans. Only 28% of companies run assessments monthly. As a result, nearly half of the executives (47%) say that DR drills or actual events showed the runbook was out of sync. Almost half (46%) of the executives surveyed say testing disrupts their organizations, and the cost of running tests keeps another quarter from testing more frequently.

There is often an over-reliance on manual processes as DR strategies aren’t becoming automated as quickly as production processes, leaving nearly a third (31%) of enterprises struggling with manual DR resources. Even many of the more mature organizations have only pockets of automation.

“Clearly, many executives don’t realize the full extent of risks they’re running,” said Bruce Rogers, Chief Insights Officer at Forbes Media. “And tight budgets force many to make trade-offs.”

“Clients today demand IT recovery solutions that are designed for complex hybrid cloud environments to restore their confidence and meet their business needs,” said Chandra Sekhar Pulamarasetti, Co-Founder and CEO of Sanovi Technologies and VP Cloud Resiliency Orchestration Software and Services at IBM. “Cyber attacks and other threats require innovative business resiliency plans that are orchestrated to anticipate problems and reduce risk, cost, and downtime in the process.”

The Business Continuity Institute


Only a month after the WannaCry attack that affected about 250,000 networks across the world, it seems that ransomware is back in the headlines again with an attack on University College London, one of the largest universities in the UK with over ten thousand employees and nearly forty thousand students, and considered to be the seventh best university in the world. The attack affected its internal shared drives, and resulted in several NHS Trusts in the UK shutting down their own servers as a precaution.

UCL first reported the attack at the end of the day on Wednesday with the Information Services Division posting that "UCL is currently experiencing a widespread ransomware attack via email. Ransomware damages files on your computer and on shared drives where you save files. Please do not open any email attachments until we advise you otherwise. To reduce any damage to UCL systems we have stopped all access to all N: and S: drives. Apologies for the obvious inconvenience this will cause."

To help reassure those at the university who rely on access to the shared drives, ISD later added that "We take snapshot backups of all our shared drives and this should protect most data even if it has been encrypted by the malware. Once we are confident the infections have been contained, then we will restore the most recent back up of the file."

Having an effective back-up programme is one of the best ways to protect against the impact of a ransomware attack. If data is backed up and the organization experiences a ransomware attack, then they can isolate the ransomware, clean the network of it, and then restore the data from the back-up. It’s not necessarily an easy process, but it means they don’t lose all their data and they don’t pay a ransom.
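The snapshot-and-restore approach described above can be sketched in a few lines of Python. This is purely an illustrative simulation; the class and method names are invented for the example and do not reflect UCL's systems or any real backup product's API.

```python
import copy

class SnapshotStore:
    """Keeps point-in-time copies of a shared drive's contents (illustrative only)."""

    def __init__(self):
        self.snapshots = []  # list of (label, files) tuples, oldest first

    def take_snapshot(self, label, files):
        # Deep-copy so later changes (or encryption by malware) can't touch the snapshot.
        self.snapshots.append((label, copy.deepcopy(files)))

    def restore_latest(self):
        # Return a copy of the most recent point-in-time snapshot.
        label, files = self.snapshots[-1]
        return copy.deepcopy(files)

# Simulated shared drive before the attack.
drive = {"report.docx": "quarterly figures", "notes.txt": "meeting notes"}
store = SnapshotStore()
store.take_snapshot("nightly", drive)

# Ransomware encrypts every file in place.
for name in drive:
    drive[name] = "<encrypted>"

# After isolating and cleaning the infection, restore from the snapshot.
drive = store.restore_latest()
print(drive["report.docx"])  # pre-attack contents, recovered without paying a ransom
```

The key design point is that the snapshot is immutable once taken: because it is a separate copy, the malware's in-place encryption cannot reach it, which is why snapshot-based back-ups are effective against this class of attack.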

Unlike WannaCry, which was reported to have infected systems running out-of-date software, this attack was the result of users clicking on a malicious link. It was first reported to be the result of a phishing email, but was later confirmed to be the result of users accessing a compromised website. Either way, it is this type of activity that featured so prominently during Business Continuity Awareness Week, with a report published by the Business Continuity Institute demonstrating that each and every one of us can take simple steps to improve cyber security, one of which was to exercise more caution when clicking on links.

"It is encouraging to see that once again the potentially damaging impact of a cyber attack has been prevented by UCL having processes in place to deal with the threat," said David Thorp, Executive Director of the BCI. "This is business continuity in action, and while it may not prevent the disruption in its entirety, it ensures that it does not escalate further into a crisis."

Migrating from one computing platform to another can, and should, give pause. It is important to be prudent when deciding whether to migrate, and to which system. Total cost of acquisition (TCA) often becomes a tipping point, but the ongoing total cost of ownership (TCO) should also factor into the equation. This is especially true when both the TCA and TCO of IBM Power systems servers are considered. Low-end scale-out Power servers are competitively priced and offer a lower TCA than x86 servers running Linux. When integrated database, security, work management, support for multiple operating systems and high-availability resources are factored into the TCO for larger systems, it’s clear that Power systems servers offer the better value.

The Past, Present and Future of POWER

Some organizations consider Power systems servers outdated, as the platform was first introduced in 1979 as the System/38. However, the more than 150,000 companies that have embraced this technology consider that heritage a benefit rather than a detriment, with the platform representing decades of ongoing enhancement. This leads up to the current POWER8 processor technology, with POWER9 servers scheduled to be announced in 2017 as part of an ongoing development roadmap.

The Benefits of IBM i on POWER

Interested in a full run-down of Power systems’ features and benefits?

The following whitepaper explores the ways in which IBM is staying ahead of the game with migration, performance, and cost benefits, all while backed by IBM experts ready to support the next generation of Power systems.

In this whitepaper, you will learn:

  • How IBM is staying relevant with the emerging IT workforce.

  • How IBM i reduces workload when upgrading systems.

  • How TCA, TCO and performance compare with x86.

  • The skills and management tools required to run Power systems and where to find them.

Download Why IBM i on POWER to learn more!

Thursday, 15 June 2017 19:13

Why IBM i on POWER

(TNS) — A Vigo, Ind., County official is seeking a renewed focus on getting the county recertified into a higher level of a national rating system in hopes of lowering costs for county residents required to have flood insurance.

“For the past few years, our office has failed in one main area,” said Jared Bayler, executive director of the Vigo County Area Planning Department. “Due to any number of reasons, we are no longer [getting a discount] in the Community Rating System program.” Bayler became executive director in February.

“What this means is we are failing to provide to Vigo County residents discounts on flood insurance programs that are made available to them,” Bayler said.

The National Flood Insurance Program’s Community Rating System is a voluntary incentive program that recognizes and encourages community floodplain management activities that exceed the program’s minimum requirements and encourages a comprehensive approach to floodplain management.



Like many others, we are trying to wrap our heads around the recent British Airways outage, an event so far-reaching and arguably avoidable that it’s difficult to believe such a thing can happen — yet it did. This event provides some good lessons for everyone. It’s a reminder that bad things can happen, even to a good organization. You need to be aware of the risks to your own technology and business and defend against them before they harm your business and your customers.

The direct costs of the outage will be substantial, to say nothing of the reputational damage and other indirect losses. It might take the airline a few quarters to recover fully. Public memory is short, and the beleaguered traveler is forgiving, but a three-day no-show is extreme. BA execs will get to the root-cause analysis soon, but the event (and historical failures at airlines in general) provides a bonanza of lessons for execs everywhere who want to better equip their organizations to handle such exigencies.

Here’s what they should do:



Fire safety officials around the world are reinforcing prevention and evacuation guidance to high-rise residents following the deadly 24-story apartment building fire at Grenfell Tower in West London.

So far, at least 17 people are confirmed dead in the fire, while close to 80 are hospitalized. UK prime minister Theresa May has ordered a public inquiry into the blaze. Insurance will play a role in the recovery.

Officials say that while catastrophic fires on the scale of Grenfell Tower are statistically rare, awareness is key.



If you are reading this, you either execute data migration or your projects rely on successful migration of data. Regardless of which, I would like to share some insight that I have gathered helping large projects through their data migration exercises. It is NOT a technical discussion, but rather a conversation focused primarily on the business and functional aspects of the data migration exercise.

Does this sound familiar – “The data migration effort had been running well until User Acceptance Testing (UAT), when the business could not validate the data”. This is one of the most common issues facing data migration and one that can grind any progress to a halt.

How about this – “The data migration team is the last one to be staffed, often well after the project is underway, and requirements and designs are being completed before the data migration team sets foot on the ground.”



The Business Continuity Institute

Most business continuity plans pay scant regard to how people might be feeling in the aftermath of a major disruptive incident and simply assume their willingness and ability to drop everything in order to activate those plans.

This assumption might be valid if the incident in question is limited in scope - such as a building, facilities, IT or supply chain issue - and doesn't result in death, injury or personal hardship. But if it's wider-reaching - for instance extreme weather, earthquake, flood, power failure, civil disturbance, terrorist incident or any of a whole host of potential events that affect the wider community - there's a major problem with it.

The fact is that people are likely to be thinking of themselves, their families and their homes, rather than the organization they work for. In which case, the business continuity plan is likely to rank somewhere near the bottom of the list of things on their minds. And their willingness to drop everything and come to the aid of the organization is, perfectly reasonably, likely to be somewhere between low and zero.

Most people have lives, and responsibilities, outside of work. But it’s much easier to simply ignore this important fact when creating our business continuity plans than to worry too much about it. So that’s precisely what many planners do. The trouble with this approach, however, is that whilst our plans might look okay on paper, they could well be doomed to failure from the outset if we actually have to put them into operation.

Andy Osborne is the Consultancy Director at Acumen, and author of Practical Business Continuity Management. You can follow him on Twitter and his blog or link up with him on LinkedIn.

Saturday, 17 June 2017 17:37

BCI: Self, self, self...

The Business Continuity Institute

Employees who become distracted at work are more likely to be the cause of human error and a potential security risk, according to a snapshot poll conducted by Centrify at Infosec Europe.

While more than a third (35%) of survey respondents cite distraction and boredom as the main cause of human error, other causes include heavy workloads (19%), excessive policies and compliance regulations (5%), social media (5%) and password sharing (4%). Poor management is also highlighted by 11% of security professionals, while 8% believe human error is caused by not recognising our data security responsibilities at work.

According to the survey, which examines how human error might lead to data security risks within organisations, over half (57%) believe businesses will eventually trust technology enough to replace employees as a way of avoiding human error in the workplace.

Despite the potential risks of human error at work, however, nearly three-quarters (74%) of respondents feel that it is the responsibility of the employee, rather than technology, to ensure that their company avoids a potential data breach.

This ties in closely with the theme for the recently ended Business Continuity Awareness Week, organized by the Business Continuity Institute, which highlighted that users can do more to play their part in cyber security. A report published during the week revealed six simple ways in which they can do this and this included better password control and more caution when clicking on links.

“It’s interesting that the majority of security professionals we surveyed are confident that businesses will trust technology enough to replace people so that fewer mistakes are made at work, yet on the other hand firmly put the responsibility for data security in the hands of employees rather than technology,” comments Andy Heather, VP and Managing Director, Centrify EMEA.

“It seems that we as employees are both responsible and responsible – so responsible for making mistakes and responsible for avoiding a potential data breach. It shows just how aware we need to be at work about what we do and how we behave when it comes to our work practices in general and our security practices in particular.”

The Business Continuity Institute

Airmic has launched a major new study to determine what resilience looks like in a digitally-transformed business world. According to the association, the digital revolution is fundamentally altering the ways in which organizations develop and execute strategy, which will impact business models and their approach to risk and resilience.

Julia Graham, Airmic's technical director and deputy CEO who is leading the project, said: "The digital revolution is moving at lightning speed and will not only alter the risks our members have to manage, but also the way they have to manage them. At the moment, we - Airmic and the business world as a whole - do not fully understand this process, so this project is about taking a leading role in the debate."

According to Airmic, several leading studies have highlighted the speed and disruptive nature of the digital revolution. KPMG's Now or never: 2016 CEO outlook, for example, warns: "The speed of change will be, quite literally, inhuman, as the advancement of data and analytics and cognitive and machine learning drive forward change more quickly than humans alone could ever achieve."

Airmic's study, Roads to Revolution, will be conducted jointly with Cass Business School and published in 2018. It will build upon Airmic's ground-breaking research, Roads to Ruin (2011), which analysed the common underlying causes of corporate failures, and Roads to Resilience (2014), which analysed the common underlying features of resilient businesses.

"Our previous research established what good and bad looked like in terms of organizational resilience, but little is understood about how this will be affected by the current wave of technological advancement," Graham said. "Through case-studies, focus groups and academic analysis, we will shed light on how organizations are transforming their business models and cultures to ensure resilience and growth in the digital age."

The Business Continuity Institute

Organizations are not doing enough to ensure their travel risk strategies are fit for the 21st century realities of business travel and fulfil their legal duty of care, according to a new report published by Airmic.

Travel risk management notes that business travel has grown by 25% over the last decade, with businesses sending employees and other people they are responsible for to a wider range of territories, including high- or extreme-risk regions. They must be able to respond to the many possible factors that could convert even a low-risk destination into a high-risk destination in a matter of hours, e.g. health, safety, security, political or social change, and natural disasters.

Businesses have a legal duty of care to protect their employees – which may include contractors and family members – and yet only 16% of Airmic members surveyed have high confidence in their travel risk management framework. To respond to this increased reliance on travel, organizations need flexible and evolving travel risk management strategies that go beyond purchasing travel insurance.

These strategies should respond to the different risks present in different territories and the requirements of the different individuals travelling. Businesses also need reliable sources of relevant intelligence, and flexible, pre-rehearsed plans in place to ensure a quick and proportionate response to any crisis impacting their people.

“Sadly every week we are currently reminded why having an effective travel risk management framework in place is imperative. As the tragic events in Westminster, Manchester and more recently on London Bridge and Borough Market demonstrate, any destination can become high risk at an intense speed,” Julia Graham, Airmic’s deputy CEO and technical director, commented.

She added: “I urge all risk professionals to review, update and rehearse how they would respond should such an incident impact their organization. Knowing where your people are and how you can communicate with each other in the event of a crisis is especially important.”

The Business Continuity Institute

Overnight a fire raged through a 24-storey tower block in West London, completely destroying it and claiming several lives. While this may have been a residential building, the speed with which the fire took hold is a clear warning that organizations must have plans in place to ensure the safety of their staff, as well as other stakeholders, should such an incident occur at work.

As land becomes more expensive, the number of high rise buildings being constructed is increasing all the time, with developers constantly striving to build taller and fit more office space on the same footprint of land. Many offices are also being redesigned to become open plan so an even greater number of people can be squeezed into the same square footage. This can come at a cost, however. The taller a building gets and the more people who work within it, the greater the challenge of finding suitable escape routes for everyone should an emergency arise.

Had this building been an office block, had the fire swept through it in the middle of the day, how quickly could it have been evacuated? How quickly could your organization have made sure that all employees, and everyone else in the building, got out safely?

Some of the residents reported that they were only warned of the fire by other residents, not by the fire alarm system. If the fire alarms didn’t work, then it is highly likely that the fire suppression system didn’t work either, which is perhaps why the fire spread so rapidly. How frequently do you check the alarm system within your building? Can you say with a high degree of certainty that, if a fire occurred, everyone would be sufficiently warned?

It was also reported that some residents who were trapped in the building had resorted to flashing their mobile phone torches to gain attention and seek help. In desperation, this was all they could do. Organizations must have an effective emergency communications system in place so urgent two-way messages can be sent out to confirm that staff are safe, or, if they are not, then they can be located and made safe as soon as possible.

The safety of staff is paramount to business continuity and making our organizations more resilient. Office space and IT can easily be replicated elsewhere - staff cannot. Not to mention, of course, the moral duty to keep them safe. We must ensure that our buildings are safe environments to work in and that, should the worst happen, staff can safely exit the building. Furthermore, we must make sure that whatever plans, processes and procedures we have in place to safeguard our staff are exercised on a regular basis so any flaws can be found and resolved.

David Thorp
Executive Director of the Business Continuity Institute

Wednesday, 14 June 2017 14:48

BCI: Ensuring the safety of our staff

Floods are becoming increasingly common. Our guide will help you be ready, react and recover should a flood hit your area.

When a flood strikes, it can wreak havoc in two ways.

The first is the immediate damage from the water itself; the second is the long-lasting aftermath.

During the UK’s wettest month on record (December 2015), the Environment Secretary at the time - Liz Truss - estimated that around 16,000 homes were flooded, costing millions in recovery and repairs.

However, she also claimed that more than 20,000 homes were protected due to flood defences that had been put into place.



Wednesday, 14 June 2017 14:46

How to minimise flood damage in your home

Despite ample capital and benign claim cost trends, insurers have held the line on trading profitability for volume, while still responding as needed to emerging trends, according to Willis Towers Watson.

Its most recent Commercial Lines Insurance Pricing Survey (CLIPS) shows that commercial insurance prices in the U.S. were nearly flat in the first quarter of 2017.

Price changes reported by carriers averaged less than 1 percent for the sixth consecutive quarter.



Wednesday, 14 June 2017 14:45

Eye on commercial insurance prices

Disaster situations are uncomfortable to think about. Even the most pessimistic among us have been guilty of avoiding the discussion by saying things like “that won’t happen to us.” But that’s the exact mindset you want to avoid when it comes to protecting your business.  

Everything we do in Business Continuity Planning involves risk impact avoidance, mitigation or acceptance. We cannot prevent outage events from occurring; we can only prepare for how to respond or minimize the impact of risks and outage events. During our speaking engagements, or when talking with clients about potential risks and conditions to plan for, we often hear, “that won’t happen” or “that can’t happen.” While I am not superstitious, there are times I wonder if Mr. Murphy is real. Today I will share some of the events that can’t happen – that did. Have you ever thought:

We are too small for any cyber criminals to target us.



Mitigating Risk to Enhance Data Security

In this article, Jason Allaway, RES Area VP for the U.K. and Ireland, reveals what the true cost of a ransomware attack like WannaCry will be in the GDPR era. As many organizations struggle to prepare for the upcoming regulation, Jason shares the three pillars of risk that must be integrated into organizations’ GDPR strategies to protect and secure sensitive data without hindering productivity.

Over the last few weeks, there have been numerous news stories around the WannaCry ransomware attack and the disruption that it has produced. WannaCry has caused major issues and compromised personal data around the world in a very short period of time.  It was reported that more than 200,000 computers were hijacked in more than 150 countries, with victims including hospitals, banks, telecommunications companies and warehouses.

Today, data is worth a lot of money, and cybercriminals know it. This is one of the key reasons why the EU has established requirements around doing more to protect data from breaches with the impending GDPR legislation. In fact, the GDPR compliance deadline of May 25, 2018 is less than one year away.



I had an epiphany today about a major reason open source is disrupting enterprise software. This is perhaps one of those things that you have heard so much, you've gone numb to it. After all, the big giants are still alive and kicking – so is this really happening? The answer is yes, but the mechanics are not what you think. It is not simply a cost play. The acquisition - one of the main weapons that big software vendors had to fight disruptors - is losing effectiveness. And that changes everything. Allow me to explain:

In the past, big vendors bought the smaller potential disruptors and got the code and customers. Cash disrupted the disruptor; investors got paid, and customers got the new technology as part of the big vendor's larger suite. Everyone was happy.



The exciting landscape of modern life has been built with the aid of powerful computers. They have done dazzling things, from making the trains run on time to helping to build skyscrapers. Now, imagine a discontinuity in computing in which these capabilities are suddenly expanded and enhanced by orders of magnitude.

You won’t have to imagine too much longer. It is in the process of happening. The fascinating thing is that this change is based on quantum science, which is completely counter-intuitive and not fully understood, even by those who are harnessing it.

Today’s computers are binary, meaning that they are based on bits that represent either a 1 or a 0. As fast as they go, this is a basic, physical gating factor that limits how much work they can do in a given amount of time. The next wave of computers uses quantum bits – called qubits – that can simultaneously represent a 1 and a 0. This property, at the root of what even scientists refer to as “quantum weirdness”, allows the computers to do computations in parallel instead of sequentially. Not surprisingly, this greatly expands the ability of this class of computers.
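To give a flavour of the scale involved, an n-qubit register is described by 2^n complex amplitudes evolving at once. The sketch below is a purely classical simulation for illustration – real quantum hardware is not programmed this way, and the function name is our own invention:

```python
import numpy as np

# A single-qubit Hadamard gate; applied to |0> it yields an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_all(n):
    """Classically simulate n qubits, all starting in |0>, after a Hadamard on each.
    The register is a vector of 2**n amplitudes - the state space doubles per qubit."""
    op = H
    state = np.array([1.0, 0.0])              # one qubit in |0>
    for _ in range(n - 1):
        op = np.kron(op, H)                   # n-fold Kronecker product of H
        state = np.kron(state, [1.0, 0.0])    # extend the register by one |0> qubit
    return op @ state

state = hadamard_all(3)
print(len(state))          # 8 amplitudes: 2**3 basis states evolve together
print(np.round(state, 3))  # each amplitude is 1/sqrt(8), about 0.354
```

Note that the simulation cost doubles with every qubit added – exactly the exponential scaling that makes genuine quantum hardware so much more capable than classical machines at this class of problem.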



Communication: the backbone, or cornerstone if you will, of any successful enterprise. Without it you can have an organization moving in multiple directions, causing confusion and archways falling down around you, when you need to be moving forward as a cohesive unit – especially during times of crisis. What makes it so key to everything? Why that particular aspect of the Business Continuity Management (BCM) framework? It’s because communication is the glue that holds more together than a disaster response – though it is, of course, very key to a disaster response. It holds us all together, and has done since the first two Homo sapiens caught each other’s eye.

Communication is used on a daily basis; from infancy to adulthood through to our autumn years.  A toddler crying communicates its hunger or discomfort and as we get older we communicate the same thing using words or if we have lost that ability, with beautifully choreographed hand gestures.  And we’re communicating in not just the good times but the bad times as well.  It can comfort us when we are feeling down or enrage us when our dander is up.



Tuesday, 13 June 2017 16:14

BCM & DR: Communication is the Key

The Business Continuity Institute

When looking at the potential threats that could disrupt our organizations, it is often physical or virtual events that we first think of – adverse weather, supply chain failure, cyber attack, pandemic. But while we often consider an event or occurrence to be disruptive, do we also consider a lack of activity to be disruptive? Is there something we’re not doing that could lead to a disruption within the organization? In today's digital world, failure to keep up with technology could be just as damaging as any tropical storm.

A new study by Capgemini and Brian Solis has found that 62% of respondents see corporate culture as one of the biggest hurdles in the journey to becoming a digital organization. As a result, companies risk falling behind the competition in today’s digital environment. Furthermore, the data shows that this challenge has worsened by 7 percentage points since 2011, when Capgemini first began its research in this area.

The Digital Culture Challenge: Closing the Employee-Leadership Gap uncovers a significant perception gap between the senior leadership and employees on the existence of a digital culture within organizations. While 40% of senior-level executives believe their firms have a digital culture, only 27% of the employees surveyed agreed with this statement.

Cyril Garcia, Head of Digital Services and member of the Group Executive Committee at Capgemini, said: “Digital technologies can bring significant new value, but organizations will only unlock that potential if they have the right sustainable digital culture ingrained and in place. Companies need to engage, empower and inspire all employees to enable the culture change together; working on this disconnect between leadership and employees is a key factor for growth. Those businesses that make digital culture a core strategic pillar will improve their relationships with customers, attract the best talent and set themselves up for success in today’s digital world.”

The findings reveal a divide between senior-level executives and employees on collaboration practices, with 85% of top executives believing that their organization promotes collaboration internally, while only 41% of employees agreed with this premise.

Corporate culture is equally as important in the business continuity profession, so much so that it features as one of the six professional practices referred to in the Business Continuity Institute's Good Practice Guidelines. Integrating business continuity into the day-to-day business activities is vital to a successful programme, but this can only be achieved with top management support.

The report highlights that companies are failing to engage employees in the culture change journey. Getting employees involved is critical for shaping an effective digital culture and accelerating the cultural transformation of the organization. Leadership and the middle management are critical to translating the broader digital vision into tangible business outcomes and rewarding positive digital behaviors.

“To compete for the future, companies must invest in a digital culture that reaches everyone in the organization. Our research shows that culture is either the number one inhibitor or catalyst to digital transformation and innovation. However, many executives believe their culture is already digital, but when you ask employees, they will disagree. This gap signifies the lack of a digital vision, strategy and tactical execution plan from the top”, said Brian Solis. “Cultivating a digital culture is a way of business that understands how technology is changing behaviors, work and market dynamics. It helps all stakeholders grow to compete more effectively in an ever-shifting business climate."

Every two years, the World Health Organization (WHO) releases a list of medications that it believes should be available, if needed, to all the people of the Earth. The latest iteration of the essential medicines list has just been released. It’s a compendium like the ones health insurers maintain to help them determine which medicines should be covered by their policies. Think of it as The World’s Formulary.

That may sound dull or at least rather wonky. But there are real-world implications when a drug makes — or is not approved for — this list. The move to include HIV drugs in 2002 arguably helped to make lifesaving antiretrovirals available to AIDS patients in developing countries. More recently, the addition of hepatitis C drugs to the list appears to have put them on a similar trajectory.

The list is meant to help countries figure out how to prioritize spending on medications. It’s a model that many use to craft their own drug formularies — while individual countries may make tweaks here and there, they don’t each need to set about inventing this wheel.



(TNS) - Ammo casings littered the entrance to the dimly lit gym basement of Fountain Middle School. Cole Davison, a junior at Fountain-Fort Carson, Colo., High School, sat on the stairs yelling at a man with a gun. The man fired a shot at Davison, hitting him in the foot. Police managed to drag the student to safety, then went back into the school. Two gunmen were inside and students were missing.

It's the nightmare no one wants to live, but it's a reality law enforcement officers, school officials and students have to be prepared to handle.

During a three-day training exercise this week, the Fountain Police Department and eight other agencies practiced responding to an active shooter and other emergencies. On Friday, the final day, police and other agencies dealt with a worst-case scenario: two armed intruders with hostages in the basement of the school's gym.



In Japan, disaster learning centers that allow visitors to experience simulated earthquakes, typhoons and fires are gaining five-star reviews on travel sites like TripAdvisor and providing valuable lessons in preparedness.

The Japan Times reports that earthquake simulators have become major tourist draws at more than 60 disaster education centers nationwide and are attracting growing numbers of foreign visitors.

Some attribute the increased interest in disaster prevention education in Japan to the 2011 Tohoku earthquake and tsunami. Others note that tourists today are more interested in life experiences than shopping.



While hackers and cyberterrorists often make headlines, there is a far more common cause for concern when it comes to safeguarding companies’ confidential data – their employees. Research from CEB, now Gartner, indicates nearly 60 percent of privacy failures result from an organization’s own employees and, worse, over half of employee-driven privacy failures result from intentional behavior.

To reduce the chances of employees creating privacy failures, most organizations first create training and communications focused on the importance of data privacy. But these efforts can prove ineffective, especially if they are created with a one-size-fits-all approach. While privacy awareness is certainly important for all employees, messages on how to reduce privacy risks designed for entry-level employees may not be as applicable to managers, or for that matter, senior executives.

To ensure a privacy program drives the right behaviors among all employees, leaders must tailor risk management strategies to the unique characteristics of different employee groups. But what risks do these groups pose?



Monday, 12 June 2017 16:49

Risk Profiles for Key Employee Groups

When WannaCry ransomware hit last month, it highlighted a very serious security problem, one that we just don’t talk about enough. That’s the use of outdated and unsupported operating systems and software.

Even before the massive ransomware attack, I knew how much of a hidden problem this was, mostly through anecdotal evidence. I’ve had informal conversations with people employed in varied industries, including those doing highly sensitive research, who have said they continued to use Windows XP because IT didn’t have the time or budget to upgrade to a newer OS, or because they simply liked XP better than anything else and switched back. We’ve heard stories that point-of-sale systems and IoT devices still run on XP because it would be too costly to switch.

BitSight has confirmed my anecdotal evidence. In a new report, “A Growing Risk Ignored: Critical Updates,” the company analyzed more than 35,000 companies from industries across the globe and found that a surprising number of companies continue to run outdated and unsupported operating systems, as well as internet browsers.
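BitSight's methodology is proprietary, but the basic bookkeeping behind such an audit is simple to sketch. In the example below the host names, inventory and audit date are invented for illustration; the end-of-support dates are the commonly cited Microsoft lifecycle dates, which should be verified against the vendor's own lifecycle pages:

```python
from datetime import date

# Illustrative end-of-extended-support dates; verify against vendor lifecycle pages.
END_OF_SUPPORT = {
    "Windows XP": date(2014, 4, 8),
    "Windows Vista": date(2017, 4, 11),
    "Windows 7": date(2020, 1, 14),
}

def unsupported_hosts(inventory, today=None):
    """Return hosts from (hostname, os_name) pairs whose OS is past end of support.
    An OS absent from the table is treated as still supported."""
    today = today or date.today()
    return [
        host for host, os_name in inventory
        if os_name in END_OF_SUPPORT and END_OF_SUPPORT[os_name] < today
    ]

inventory = [
    ("pos-01", "Windows XP"),      # hypothetical point-of-sale terminal
    ("lab-07", "Windows 10"),
    ("hr-12", "Windows Vista"),
]
print(unsupported_hosts(inventory, today=date(2017, 6, 14)))  # ['pos-01', 'hr-12']
```

Running even a naive check like this against an asset inventory is a cheap first step toward quantifying the exposure BitSight describes.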



The Business Continuity Institute

Data is of incredible value to our organizations. The more we have, the more we can discover about our customers and our market. The more we know about these, the more we can fine tune our products and services to meet their precise needs. Of course that data doesn't just have a value to our own organizations, it also has a value to others, and that is why we need to make sure it is protected.

Over the last few years we've seen some big organizations receive fines of tens of millions of dollars as a result of a data breach, and we've also seen them suffer severe reputational losses. When the General Data Protection Regulation comes into force next May, the potential for large fines will increase further for any organization that holds data on EU citizens.

It is important that organizations have processes in place to protect their data, and processes in place to be able to respond in the event of a breach. The BCI has now begun a new study, conducted in collaboration with Mimecast, that seeks to discover the attitudes, behaviours and business continuity arrangements in place related to information security.

Please do support this study by completing the survey; it should take only about ten minutes, and each respondent will be in with a chance of winning a £100 Amazon gift card.

The Business Continuity Institute

Like the terrorist attack in Manchester, the response by individuals to the London Bridge attack last Saturday made me proud to be British. The off-duty policeman rugby tackling one of the terrorists, the British Transport Police officer armed with just a baton fighting one of the knifemen and the people who threw bottles, chairs and tables to protect customers in one of the pubs nearby are all heroes who rose to the occasion.

One of the people interviewed about the incident was a gentleman from Royal United Services Institute, who made a comment about how during the incident he saw many people enact the government’s advice of ‘Run, Hide, Tell’. This got me thinking that the emergency services response to both recent attacks and the public’s use of ‘Run, Hide, Tell’ are very good examples of how exercising plans actually makes a difference.

I believe that a few weeks before the Manchester attack, the police had actually practised an exercise very similar to the incident they then had to respond to. Last year, there was an exercise at The Trafford Centre, which involved 800 volunteers playing members of the public, in order to test the emergency response to a major terrorist incident. These extensive ‘live’ exercises require a lot of planning and are costly to run, but their worth was proved by the response to the Manchester bombing. In the media coverage of the incident, there was not one bit of criticism directed at the emergency services. This is in stark contrast to the responses, although some time ago, to Hillsborough and the Bradford fire.

In the same way, the ability of the police to respond to the attack in London and kill the terrorists within eight minutes, is again testament to the planning and professionalism of the police. I was training a Bank’s Country Crisis Management team this week and I used both examples as reasons why exercising plans is so important.

I think the public using ‘Run, Hide, Tell’ is important in three aspects. Firstly, it worked and helped to reduce the number of casualties. Secondly, I think as business continuity people we should be teaching this to our staff. A couple of years ago I thought that doing so would be alarmist and unlikely to be required but, given the threat level and the number of attacks, I now think it is a useful drill to teach.

Thirdly, I think it illustrates the importance of embedding business continuity and shows that everyone needs to know what to do in an emergency. If employees hear about an incident which has affected your head office, instead of going home and waiting for instructions, they know what to do themselves. If they are a member of the crisis management team, they know to immediately go to the second team location or the work area recovery location if they have recovery roles. This will speed up the response, as lots of time and effort will not be spent telling staff what to do and where to go.

We as business continuity people don’t need to be convinced to exercise our plans, but often those who have roles in the team are reluctant!

P.S. Have a look at the citizenAID App, which has been produced with lots of useful information about how to respond to a terrorist attack.

Charlie Maclean-Bristol is a Fellow of the Business Continuity Institute, Director at PlanB Consulting and Director of Training at Business Continuity Training.

Governments often make legal requirements about things that could damage people’s health, whether in a physical, financial, or possibly other sense.

Motor vehicles must be insured. Underage drinking is forbidden. Enterprises are required to meet health and safety standards for employees and visitors.

Financial institutions must be certified and separate internal finances from customer accounts. With today’s dependency on IT and data, a case could be made for enforcing minimum levels of disaster recovery planning and management.

After all, a systems failure could force a company to shut down, possibly causing severe hardship.



Change Management is a hot topic lately on my social media channels. Like my friend Jon Hall, I am a long-time veteran of the classic Change Advisory Board (CAB) process. It almost seems medieval: a weekly or bi-weekly meeting of all-powerful IT leaders and senior engineers, holding court like royalty of old, hearing the supplications of the assembled peasants seeking various favors. I’ve heard the terms “security theater” and “governance theater” applied to unthinking and ritualistic practices in the GRC (governance, risk, and compliance) space. The CAB spectacle, at its worst, is just another form of IT theater, and it’s time to ring that curtain down.

As a process symbolizing traditional IT service management and the ITIL framework, the CAB is under increasing pressure to modernize in response to Agile and DevOps trends. However, change management emerged for a reason, and I think it’s prudent to look at what, at its best, the practice actually does and why so many companies have used it for so long.

This was the topic of my most recent research, “Change Management: Let’s Get Back to Basics.” In that report, I cover the fundamental reasons for the Change process. It has legitimate objectives — coordination, risk reduction, audit trail — that do not go away because of Agile or DevOps. The question is rather: how does the modern, customer-led, digital organization achieve them? The classic “issue a request and appear before a bi-weekly CAB” is one way to achieve the desired outcomes — and likely not the most effective means, as I discuss.

In the past, security breaches were viewed as a single event occurring at a certain point in time. However, this is no longer the case. Security threats now rarely occur as singular events, and a new kind of attack is on the rise: Advanced Persistent Threats (APTs). An APT is a network attack in which an unauthorized person or device gains access to a network and, instead of immediately stealing data or damaging infrastructure, stays there for a long period of time, remaining undetected. It could even occur from a device or person with proper security clearance, thus appearing as normal activity. It is much harder to detect these attacks as they are typically small in scope and focus on very specific targets (usually in nontechnical departments where security threats are less likely to be noticed or reported), and occur over a period of weeks or even months.

In 2014, RSA, a cybersecurity company, was called into the U.S. government’s Office of Personnel Management to fix a low-level problem. Upon arrival, RSA discovered that there were intruders in the agency’s network, and they had been there for over six months, routinely stealing data in an organized yet inconspicuous manner. If not for the coincidental security check by RSA, the organization might never have noticed the breach. Ironically, the door into the system was unwittingly opened by an employee who accidentally downloaded malware from a spearphishing attack, much like the Google Docs cyber attack that took place in May. The employee was quickly informed and asked to change his password; he and his organization thought the breach ended there, but it continued for months undetected.
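One reason such breaches go unnoticed is that alerting tuned to large traffic spikes misses slow, steady exfiltration. A toy sketch of the opposite approach – flagging persistence rather than volume – is below; the thresholds, host names and traffic figures are invented for illustration, not drawn from any real detection product:

```python
def flag_persistent_senders(daily_mb_by_host, min_days=30, max_daily_mb=50):
    """Flag hosts whose outbound traffic stays *below* a per-day size threshold
    yet recurs on many days: the slow, steady pattern typical of APT exfiltration."""
    flagged = []
    for host, daily_mb in daily_mb_by_host.items():
        # Days with small-but-nonzero transfers; big bursts are handled elsewhere.
        quiet_days = [mb for mb in daily_mb if 0 < mb <= max_daily_mb]
        if len(quiet_days) >= min_days:
            flagged.append(host)
    return flagged

traffic = {
    "hr-laptop-3": [12, 9, 15] * 12,   # ~36 days of small outbound transfers
    "build-server": [900, 1200, 40],   # bursty but short-lived activity
}
print(flag_persistent_senders(traffic))  # ['hr-laptop-3']
```

A volume-based alert would rank "build-server" as the suspicious host; it is the unremarkable "hr-laptop-3", quietly sending a little data for weeks, that fits the APT pattern described above.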



Severe weather across the United States in May resulted in combined public and private insured losses of at least $3 billion.

Aon Benfield’s latest Global Catastrophe Recap report reveals that central and eastern parts of the U.S. saw extensive damage from large hail, straight-line winds, tornadoes and isolated flash flooding during last month’s storms.

The costliest event? A May 8 major storm in the greater Denver, Colorado metro region, where damage from softball-sized hail resulted in an insured loss of more than $1.4 billion in the state alone.



Automation gets a bad rep these days, what with public fear that robots will take over jobs (an invalid assumption – we will be working side by side with them).

However, if you asked even the most diehard Luddites whether they were ready and willing to give up the following:

  • Depositing a check using a mobile app
  • Ordering products on Amazon to receive the next day
  • Accepting a jury duty request online

...they would probably hesitate.



The Business Continuity Institute

New and evolving threats combined with persistent resource challenges limit organizations’ abilities to defend against cyber intrusions, and 80% of security leaders now believe it is likely their enterprise will experience a cyber attack this year. Despite this, many organizations are struggling to keep pace with the threat environment.

ISACA's State of Cyber Security Study found that more than half (53%) of survey respondents reported a year-on-year increase in cyber attacks for 2016, representing a combination of changing threat entry points and types of threats. IoT overtook mobile as the primary focus for cyber defenses, with 97% of organizations seeing a rise in its usage. As IoT becomes more prevalent in organizations, cyber security professionals need to ensure protocols are in place to safeguard new threat entry points.

62% reported experiencing ransomware in 2016, but only 53% have a formal process in place to address it - a concerning number given the significant international impact of the recent WannaCry ransomware attack. Malicious attacks that can impair an organization’s operations or user data remain high in general (78% of organizations reporting attacks).

Additionally, fewer than a third of organizations (31%) say they routinely test their security controls, and 13% never test them. 16% do not have an incident response plan.

“There is a significant and concerning gap between the threats an organization faces and its readiness to address those threats in a timely or effective manner,” said Christos Dimitriadis, board chair and group head of information security at INTRALOT. “Cyber security professionals face huge demands to secure organizational infrastructure, and teams need to be properly trained, resourced and prepared.”

The Business Continuity Institute

Two-thirds of UK businesses believe their organization to be highly protected from attempts by outsiders to gain access to their systems and data, and a similar proportion maintain they have the right processes in place to adequately react to privacy and security threats.

The Willis Towers Watson Cyber Pulse Survey also found that the disparity between corporate feelings of preparedness and the increasing number of cyber security incidents could be a result of lack of responsibility or accountability among employees, the human element of the cyber equation. UK employees ranked ‘insufficient understanding’ (61%) as the biggest barrier to their organization effectively managing its cyber risk. Nearly half (46%) spent 30 minutes or less on cyber security training in 2016, and over a quarter (27%) received none at all.

More concerning for employers is the discovery that, of the employees who did complete cyber training, nearly two-thirds (62%) admitted they “only completed the training because it was required”, and nearly half (44%) believe that opening any email on their work computer is safe. This suggests that the employees may not be engaged or feel the personal accountability necessary to drive long-term, sustainable behaviours.

Anthony Dagostino, Head of Global Cyber Risk, Willis Towers Watson, said: “As the world has seen with the proliferation of phishing scams, most recently highlighted by the global WannaCry ransomware attack, the opening of just one suspicious email containing a harmful link or attachment can lead to a company-wide event. However there appears to be a disconnect between executive priorities around data protection and the need to invest in a cyber-savvy workforce through training, incentives and talent management strategies.”

The survey also detailed additional barriers that companies feel impact their cyber preparedness and the degree to which corporations are providing cyber training to their employees. Nearly a third (30%) of employees surveyed have logged into their work-designated computer or mobile device over an unsecured public network (such as public Wi-Fi). Only 40% of the employers surveyed felt that they had made progress addressing cyber security factors tied to human error and behaviours in the last three years.

Issues such as these were raised in the Business Continuity Institute's cyber security report, published during Business Continuity Awareness Week, which highlighted several areas in which users can leave their organizations vulnerable to a cyber attack.

“Hackers are exploiting the fact that while corporations are building walls of technology around their organizations and their networks, by far the biggest threat to corporate digital security and privacy continues to come from the employees within, often completely by accident,” said Dagostino. “A truly holistic cyber risk management strategy requires at its core a cyber-savvy workforce, however organizations first have to know where the vulnerabilities are in order to plug the gaps. Many organizations are facing talent deficiencies and skills shortages in their IT departments, which in turn are creating significant loopholes in their overall security measures.”

Three items – two surveys and a government study – that were released in recent weeks show just how serious the Internet of Things (IoT) security situation is.

Altman Vilandrie & Company found that 48 percent of responding organizations’ IoT networks have been breached, some more than once.

The survey, which included results from almost 400 firms, also touched on budgetary issues. IoT security breaches can cost the equivalent of 13.4 percent of annual revenue for firms that take in less than $5 million. Almost half of larger companies, with annual revenues of more than $2 billion, estimated that just one breach could cost more than $20 million, according to the press release.
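As a back-of-the-envelope illustration of the arithmetic behind those figures (the 13.4 percent ratio is the survey's; the revenue figure below is a hypothetical example):

```python
# Illustrative arithmetic only: applying the survey's 13.4% ratio
# to a hypothetical small firm's annual revenue.
def breach_cost_estimate(annual_revenue: float, pct_of_revenue: float = 0.134) -> float:
    """Estimated cost of an IoT breach as a share of annual revenue."""
    return annual_revenue * pct_of_revenue

# A firm taking in $5 million would face roughly $670,000 per breach.
print(f"${breach_cost_estimate(5_000_000):,.0f}")  # → $670,000
```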



Thursday, 08 June 2017 14:20

IoT Security: Even Worse Than You Think

(TNS) - Disaster can strike at any time, which means the Georgia Emergency Management Agency, Homeland Security, is open every minute of every hour of every day.

In a bunker in the GEMA headquarters’ basement on the east side of Atlanta, people are always ready to answer the call when calamity strikes.

From terror attacks to hurricanes, wildfires or avian flu outbreaks, GEMA has a hierarchy of emergency specialists ready to mobilize.

Wednesday morning, Macon-Bibb County’s emergency planners visited the State Operations Center for their regular Emergency Support Functions meeting.



The cloud is quickly expanding from a general-purpose data services platform to a series of highly targeted industry vertical solutions, further decreasing the need for businesses of all types to maintain infrastructure to support their operational models.

This is proving to be a crucial niche for smaller cloud players, and even software developers incorporating cloud services into their platforms, as they attempt to carve market share from hyperscale providers like Google and Amazon.

Earlier this year, ZDNet’s Manek Dubash highlighted the steady shift toward vertical clouds, noting that while initial cloud deployments were earmarked for bulk storage and generic processing, organizations are now looking for the same customized environments in the cloud that they have built up in the local data center. Initially, of course, vertical clouds gravitated toward leading industries like health care and finance, but this is changing as digitalization puts pressure on all sectors of the economy to streamline infrastructure and augment their product lines with digital services.



SEATTLE – A year following one of the nation’s largest domestic drills, lessons learned continue to guide strategies that improve the Pacific Northwest’s ability to survive and recover from a catastrophic Cascadia Subduction Zone (CSZ) earthquake and tsunami.

On June 7, 2016, more than 20,000 emergency managers in Idaho, Oregon and Washington kicked off Cascadia Rising 2016, a four-day, large scale exercise to test response and recovery capabilities in the wake of a 9.0 magnitude CSZ earthquake and tsunami. The exercise involved local, state, tribal and federal partners, along with military commands, private sector and non-governmental organizations.

Lessons learned from Cascadia Rising 2016

"I'm pleased the momentum from Cascadia Rising continues to gain speed," said Maj. Gen. Bret Daugherty, director of the Washington Military Department and commander of the Washington National Guard. "As a result of the exercise, our governor directed the formation of a Resilient Washington sub-cabinet, a multi-agency workgroup charged with improving our state's resiliency. Cascadia Rising also guided our decision to change our recommendation on preparedness, so we're now telling people to have enough emergency supplies to stay on their own for up to two weeks."

“Cascadia Rising was the largest exercise the State of Oregon has ever conducted. The complexity of the four-day exercise provided an unprecedented opportunity to examine and assess response and emergency management practices, and identify areas where we excel and where we can improve,” said Oregon Office of Emergency Management Director Andrew Phelps. “The collaboration among all levels of government, and with our private sector partners leading up to and during the exercise, was outstanding. I believe these relationships were strengthened through this experience and will continue to grow as we work toward enhancing our preparedness posture.”

“In addition, Cascadia Rising served as a reminder to all Oregonians that individual and family emergency preparedness is key to an effective response to an earthquake or any disaster, and to beginning the recovery process,” said Phelps. “As we constantly improve our capabilities, we ask all to be prepared for at least two weeks.”

Idaho’s participation helped raise awareness that the residual effects of an earthquake and tsunami along the coast would be felt in Idaho. That includes the possible need to accommodate tens of thousands of evacuees and displaced persons who would be directly impacted.

“The countless strong partnerships we cultivated in the years leading up to the exercise proved invaluable to the success of Cascadia Rising in Idaho,” said Gen Brad Richy, of the Idaho Office of Emergency Management. “The collaboration with FEMA Region 10, and our Idaho counties, is proving indispensable as Idaho currently manages one of the most challenging flood seasons on record. Thirty-one of Idaho’s 44 counties have disaster declarations in place right now. When people ask about the importance of exercises, I like to point out that lessons learned during Cascadia Rising 2016 have improved our swift and effective response to the 2017 flooding disasters.”

“The Cascadia Rising 2016 exercise highlighted a number of critical areas that we, the emergency management community, should improve before this fault ruptures, which will impact large portions of our residents and infrastructure. It is exercises like this that foster coordination and help build relationships before a real-world event occurs,” said Sharon Loper, Acting FEMA Region 10 Administrator. “The exercise highlighted a number of infrastructure interdependencies our residents have come to rely on, such as electricity, communications, fuel, water and our roads. Most of these sectors would be heavily disrupted after a CSZ event, and plans are being developed and exercised that focus on the efficient recovery of these essential services. In this past year, FEMA Region 10 has made improvements in coordinating disaster logistics, family reunification strategies and mass power outage scenarios with our partners.”

“Every exercise teaches us something and improves our response,” said Loper. “I’m pleased so many partners and community members collaborate on these important issues. We should continue to work together so that we are all better prepared to protect lives and property.” 


Lying mostly offshore, the plate interface of the Cascadia Subduction Zone is a giant fault approximately 700 miles long. Here, the tectonic plates to the west slide (subduct) beneath the North American plate. Friction keeps the two plates locked, so stress along the boundary builds continuously until the fault suddenly breaks, producing a potentially devastating rupture along the full 700 miles and an ensuing tsunami along the California, Oregon and Washington coastlines. Last year’s Cascadia Rising 2016 exercise tested plans and procedures against a 9.0M earthquake and follow-on tsunami, with the aim of improving catastrophic disaster operational readiness across the whole community.
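To put a 9.0M event in context, the standard Gutenberg–Richter magnitude–energy relation (log10 E = 1.5M + 4.8, with E in joules) shows how quickly radiated energy grows with magnitude; a quick sketch:

```python
# Illustrative calculation using the standard Gutenberg-Richter
# magnitude-energy relation: log10(E) = 1.5 * M + 4.8 (E in joules).
def seismic_energy_joules(magnitude: float) -> float:
    """Radiated seismic energy for a given moment magnitude."""
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole-magnitude step releases ~31.6x more energy, so an M9.0
# releases about 1,000x the energy of an M7.0.
ratio = seismic_energy_joules(9.0) / seismic_energy_joules(7.0)
print(round(ratio))  # → 1000
```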

The Cascadia Subduction Zone off the coast of North America spans from northern California to southern British Columbia. This subduction zone can produce earthquakes as large as magnitude 9 and corresponding tsunamis.

Cascadia Rising 2016 was a four-day exercise focused on interagency and multi-state coordination following a 9.0M Cascadia Subduction Zone earthquake and follow-on tsunami. Emergency management centers at local, state, tribal and federal levels, in coordination with military commands, private sector and non-governmental organizations in Washington, Oregon and Idaho, activated to coordinate simulated field response operations.

The Business Continuity Institute

When we're developing our business continuity programmes, do we consider political rows a threat to our organization? Do we consider whether a dispute between countries could filter down and affect us? Political tensions certainly exist worldwide; you only have to look at the relationship between the US and Mexico, or the tensions growing between the UK and the EU as Brexit looms closer.

Qatar has now found itself at the centre of such an issue as many of its neighbouring Gulf States are cutting diplomatic ties and closing borders. Saudi Arabia, Bahrain, Egypt, the UAE and Yemen have all turned their backs on Qatar, leaving it in isolation, and Qatari citizens in those countries have been given two weeks to leave.

Qatar is the world's largest supplier of liquefied natural gas, although exports don't seem to be affected so far. The problem is imports, on which Qatar relies for 80% of its food. With its only land border closed at Saudi Arabia's insistence, a backlog of lorries has been held up and is waiting to be re-routed.

The problem is further exacerbated because many containers destined for Qatar arrive via Dubai, where they are transferred to smaller vessels to complete the journey. With a ban on all vessels travelling to, or arriving from, Qatar, this is no longer an option.

Added to this, many infrastructure and construction projects in Qatar use consultants from elsewhere in the Middle East, and those consultants are now unable to travel to Qatar to support these projects, meaning delays are inevitable.

Clearly the cause of the diplomatic tension is important, but organizations must think beyond this and consider how it will have an impact on them and their supply chains. While it is unknown how long this situation will last, a similar incident occurred in 2014 and went on for nine months, so it may not end any time soon.

Most people think of Mail-Gard in terms of disaster recovery support. It’s what we’re known for, and intuitively, people understand that during a flood, tornado, or major power outage, a backup partner is necessary to ensure important documents are still delivered to customers to keep your business running without interruption.

But bad weather and Acts of God aren’t always to blame for the times when a company’s in-house print and mail operations may be swamped. Seasonal volume swings or equipment upgrades can leave in-house operations overwhelmed, and they simply can’t keep up. That’s when Mail-Gard’s print outsourcing comes in, and it’s another area where we can be there for you in a time of need.



If you’ve read through our recent post on ISO Business Continuity Standard 22301, you know the components involved in building a high-performing program. Still, it can be a daunting task to meet this complex standard; how can you be sure you have all the angles covered? Where should you even start?

To ensure consistency and completeness as you develop your program, we’ve designed an ISO 22301 checklist. If you can verify that your program has each of the following elements associated with Sections 5-10 of the standard, your company does indeed have the organized and thorough continuity program outlined in ISO 22301. You can also use it as an ISO 22301 audit checklist if your company is preparing to undergo an official certification process. *The starred items are where most companies fall short, in our experience, so pay special attention to your efforts in those areas.

A crucial part of meeting business continuity standards like ISO 22301 is a well-written business recovery plan. Find out the components of a successful plan and get sample checklists in this free guide.



The Business Continuity Institute


It is election day in the UK tomorrow, and a chance to vote for who we want to represent us in Parliament, and ultimately who we want to lead our country into the future. A month ago we could probably have said with some degree of certainty who the winner would be and by how much, but now that certainty is gone. The gap has narrowed and we’re not entirely sure what direction the country will take from Friday morning onwards.

There could still be a majority Government for the Conservative Party, or they could lose that majority and seek to form a Coalition Government, or even try to govern on their own as a minority. It is not completely out of the question that the Labour Party could win.

For many countries over the last few decades, the election of a new government has arguably resulted in very little noticeable change. The policies of our leading parties may vary slightly, but it hasn’t usually made a substantial difference to us or our organizations.

Politics is changing though, and the leading parties all over the world are moving further apart on the political spectrum. You only have to look to the US and French Presidential Elections to observe the deep divides that are appearing between political parties and across the population.

The UK is no different. Given the current split between the Conservative and Labour Parties, the outcome could determine whether we will have more years of austerity and the privatisation of public services, or whether we will have increased public spending and nationalisation. It could determine whether we have a soft Brexit, a hard Brexit, or perhaps even no Brexit at all. The impact will not just be felt in the UK, but all across the European Union and perhaps even further afield.

It is this uncertainty that puts business continuity professionals in their element – being able to analyse what the possible outcomes could be, what impact they could have on the organization and what mechanisms could be put in place to prevent them from becoming an issue.

Whenever an election occurs, whether in your organization’s home country or in one it does business with, business continuity professionals should be studying the manifestos of the major parties to consider how much of an impact the different policies could have on their organization.

Will there be more or less regulation? Will there be more or less public spending? Will there be more or less interference from Central Government? Whatever the answers to these questions, our organizations will have to consider the appropriate responses. These considerations should also go beyond the direct impact of the policies to the unintended consequences: for example, will certain policies result in increased protester activity that could lead to disruption?

We cannot predict the future, and in countries where there are free and democratic elections, there is no way of knowing for certain what the outcome of those elections will be, but we can prepare for it. We can ensure that our organizations are more resilient to the changes that may come about as a result.

If our organizations are to achieve continued growth, then they must be adaptable to change, wherever that change may come from. They must be able to overcome uncertainty, wherever that uncertainty lies. And they must be prepared for the future, whatever it holds.

David Thorp
Executive Director of the Business Continuity Institute

One of the biggest challenges with shifting applications from an on-premises environment into a public cloud is the sheer volume of data that often needs to be moved. The amount of time and effort involved in a cloud migration has been nothing less than daunting for many IT organizations.

Veritas Technologies today announced it has significantly simplified those data migration issues with the launch of Veritas CloudMobility, which applies software that Veritas originally developed for backing up applications to the task of cloud migration. Alex Sakaguchi, director of global solutions marketing for Veritas, says the difference now is that data migration into the cloud can be executed via a single mouse click.

“It’s based on the same technology we use for disaster recovery,” says Sakaguchi.

Veritas today also announced Veritas CloudPoint, which enables IT organizations to much more aggressively schedule the capturing of snapshots of data residing in multiple public clouds as part of an effort to accelerate recovery time and point objectives.



Ransomware defense is often an uncomfortable subject where enterprises must face some hard truths and new responsibilities. Nevertheless, it’s becoming increasingly necessary. 

According to the FBI, there were an average of 4,000 ransomware attacks per day in 2016. This represents a 300% increase from 2015. Unfortunately, when we consider data breaches, we are usually talking about how organizations are prepared and will act during a breach, not whether a breach will occur. We see and hear about ransomware at an increased rate, most recently the WannaCry attack. WannaCry infected hundreds of thousands of computers in over 150 countries, affecting everyone from individuals to large organizations.



This is part 6 of a multi-part series on the Analytics Operating Model.

In our recent blog, Data Oriented Architecture: Laying the Right Foundation, we examined the challenges of selecting an enterprise platform for big data and approaches for overcoming those challenges. As we continue this series, we dive deeper to explore each element of a sound analytics capability in an era where data is plentiful and computing is powerful, but results have not yet lived up to the hype.

We live and work in an age where every device and sensor can generate and transmit data. Companies must develop capabilities that allow them to identify and manage the right data resources. A key driver of success is an organization’s ability to manage data in a way that maximizes value through analytics data management practices. High quality, accurate data is crucial to a successful algorithm and ensures that data-driven decisions are based on facts. Further, companies must also consider how this data fits into the broader organizational value chain in which they are operating to fully extract and understand the value stored in their data.

The ability to successfully deliver analytics solutions is dependent on an organization’s ability to develop an understanding of how different data sets contextually fit into the organization by managing data as an asset and establishing an enterprise data model.



Before my last deployment (quite a while ago, thankfully) my unit was training on a variety of tactics to make us all more effective in an operational setting. That’s the long way of saying we were all getting PT'd repeatedly and learning how terrible we were at stopping the bad guys; luckily, we all got better as time went on. Anyway...

One of the most valuable lessons we learned from working with the guys in some of the more “special” operational roles was that things shouldn’t be fair. 

In other words, the bad guys didn’t play fair…Why should we?



Wednesday, 07 June 2017 15:05

For More Cyber Operations Wins, Cheat…

Hyperconverged infrastructure (HCI) is typically thought of as a web-scale data center solution. But it turns out that the technology’s space and management advantages are making it equally compelling for small and medium-sized businesses (SMBs).

Everyone has a vested interest in streamlining their data infrastructure, but for SMBs, the need is even greater because they are facing the same Big Data explosion, relative to their size, as larger organizations but lack the finances and manpower to build a traditional enterprise environment. There is always the cloud, of course, but costs tend to scale with both data loads and the level of service required, and performance tends to be lacking compared to state-of-the-art, on-premises resources.

Mike Grisamore, vice president of small business sales at CDW, argues in Biz Tech Magazine that with entry-level HCI systems now starting at $25,000, small organizations have an opportunity to kick their IT infrastructure into the future without blowing their typically tight budgets. The two key use cases for HCI are small organizations that are due for a hardware refresh and those that are launching specialized projects. And the best part is that with a modular infrastructure, organizations can start small and easily add modules as requirements scale, usually with limited in-house technical staff, or none at all.



Wednesday, 07 June 2017 14:59

SMBs Warming Up to Hyperconvergence

The Business Continuity Institute

The Conservative, Labour and Liberal Democrats’ manifestos all ignore the true potential impact of Brexit, a new report by academics from The UK in a Changing Europe shows. Brexit may impose significant economic costs, at least in the short-term, yet all the parties make pledges as if the post-Brexit world will be a case of business as usual.

The report, Red, Yellow and Blue Brexit: the manifestos uncovered, highlights the challenge Brexit represents for the British state. The civil service will need, among other things, to coordinate the negotiations, draft the Great Repeal Bill and prepare primary legislation, while the necessary administrative and regulatory structures will need to be put in place before the UK leaves the EU. Yet the manifestos, with their ambitious policy pledges, fail to take account of the constraints this process will place on administrative resources.

Professor Anand Menon, The UK in a Changing Europe director, said: “The majority of the next parliament will be a post-Brexit one. It will have to deal with the implications of one of the most important and difficult decisions that Britain has ever taken. What a shame the parties did not factor this into their plans.”

The Conservative policy to reduce net immigration to the tens of thousands is also likely to have severe economic consequences. The party does not quantify the consequences of its immigration policies. However, the Office for Budget Responsibility has estimated the fiscal impact of a reduction of net migration from 265,000 to 185,000 at about £6bn a year by 2021.

Labour want to maintain membership of the single market but will end freedom of movement when the UK leaves the EU. Their position is thus fatally flawed as the EU will demand acceptance of all four freedoms in return for membership of the single market.

Labour states it would immediately guarantee the rights of EU nationals in the UK and ‘secure reciprocal rights’ for UK nationals in the EU. However, the EU has made clear there will be no final agreement on any one area until there is agreement in all areas. Also, there is no detail on how the huge administrative challenges will be met.

In foreign and security policy, the report notes that there is strikingly little of substance in any of the manifestos as to how Brexit might impact on Britain’s international role. Nowhere are strategic priorities laid out.

None of the manifestos comprehensively addresses what will happen in relation to EU public health law and policy. Nor are the profound challenges Brexit poses to the devolution settlement grappled with.

There is no mention of the jurisdiction of the European Court of Justice in the Conservative manifesto – previously a clear ‘red line’. Yet there is absolutely nothing in the Tory manifesto to reassure key sectors like pharmaceuticals, financial services, and the automotive industry, whose regulatory position, access to markets, or supply chains are threatened by Brexit.

Labour provides no detail as to why ‘no deal’ is the worst possible option for Britain, rejecting it as a viable alternative but failing to make clear how the EU27 could be made to agree to this.

A Liberal Democrat government would be caught between negotiating a very close relationship with the EU and arguing such a relationship would not be preferable to remaining.

Professor Menon said: “What is striking is that while all three parties view Brexit as a major event, the manifestos treat it largely in isolation from other aspects of policy, rather than as the defining issue of the next parliament.”

Wednesday, 07 June 2017 14:57

BCI: Manifestos Hide Truth About Brexit

The Business Continuity Institute

4 in 10 organizations believe that C-level executives, including the CEO, are most at risk of being hacked when working outside of the office, according to a new study by iPass. Cafes and coffee shops were ranked the number one high-risk venue by 42% of respondents, from a list also including airports (30%), hotels (16%), exhibition centres (7%) and airplanes (4%).

Compiling the responses of 500 organizations from the US, UK, Germany and France, the annual iPass Mobile Security Report provides an overview of how companies are dealing with the trade-off between security and the need to enable a mobile workforce. Indeed, the vast majority (93%) of respondents said they were concerned about the security challenges posed by a growing mobile workforce. Almost half (47%) said they were ‘very’ concerned, up from 36% in 2016. Furthermore, more than two thirds of organizations (68%) have chosen to ban employee use of free public Wi-Fi hotspots to some degree (compared to 62% in 2016), while 33% of organizations ban employee use at all times, up from 22% in 2016.

“The grim reality is that C-level executives are by far at the greatest risk of being hacked outside of the office. They are not your typical 9-5 office worker. They often work long hours, are rarely confined to the office, and have unrestricted access to the most sensitive company data imaginable. They represent a dangerous combination of being both highly valuable and highly available, therefore a prime target for any hacker,” said Raghu Konka, vice president of engineering at iPass. “Cafes and coffee shops are everywhere and offer both convenience and comfort for mobile workers, who flock to these venues for the free high-speed internet as much as for the coffee. However, cafes invariably have lax security standards, meaning that anyone using these networks will be potentially vulnerable.”

Man-in-the-middle attacks, whereby an attacker can secretly relay and even alter communications without the mobile user knowing, were identified by 69% of organizations as being of concern when their employees use public Wi-Fi. However, more than half of respondents also chose a lack of encryption (63%), unpatched operating systems (55%), and hotspot spoofing (58%) as chief concerns.
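One baseline mitigation for the man-in-the-middle risk described above is strict TLS certificate and hostname verification on every connection a mobile client makes. A minimal sketch using Python's standard library (illustrative only; hardened mobile clients typically add certificate pinning on top of this):

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> str:
    """Connect over TLS with certificate and hostname verification on.
    An interceptor on a rogue hotspot presenting a forged certificate
    causes the handshake to raise ssl.SSLCertVerificationError instead
    of silently succeeding."""
    context = ssl.create_default_context()  # CERT_REQUIRED + check_hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # negotiated protocol, e.g. "TLSv1.3"
```

The point is simply that verification must never be disabled (no `verify=False`-style shortcuts) when traffic may traverse untrusted networks such as public Wi-Fi.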

The dangers of using public Wi-Fi were an issue raised in the Business Continuity Institute's cyber security report, published during Business Continuity Awareness Week, which also highlighted several other areas in which users can leave their organizations vulnerable to a cyber attack.

Some of the other findings of the iPass report and regional trends include:

  • The US (98%) is most concerned by the increasing number of mobile security challenges – compared to France (88%), Germany (89%) and the UK (92%)
  • Nearly one in ten UK organizations (8%) said that they have no security concerns when employees use public Wi-Fi hotspots. In contrast, this figure is one percent in the US and Germany, and 2% in France
  • Similarly, UK organizations are the least likely to ban the use of public Wi-Fi. 44% said that they have no plans to do so, as opposed to 8% in Germany, 10% in the U.S. and 15% in France
  • Worldwide, 75% of enterprises still allow or encourage the use of MiFi devices. In France, however, 29% of businesses have banned them due to security concerns

“Organizations are more aware of the mobile security threat than ever, but they still struggle to find the balance between security and productivity,” continued Konka. “While businesses understand that free public Wi-Fi hotspots can empower employees to do their job and be more productive, they are also fearful of the potential security threat. Man-in-the-middle attacks were identified as the primary threat, but the entire mobile attack surface is getting larger. Organizations must recognize this fact and do their best to ensure that their mobile workers are securely connected.”

“Sadly, in response to this growing threat, the majority of organizations are choosing to ban first and think later. They ignore the fact that, in an increasingly mobile world, there are actually far more opportunities than threats. Rather than give in to security threats and enforce bans that can be detrimental or even unenforceable, businesses must instead ensure that their mobile workers have the tools to get online and work securely at all times.”

The cliché of “change is the only constant” is true for most enterprises. Customers, business analysts, and employees all expect some sort of evolution, even if it is with varying degrees of enthusiasm.

Even the minority whose positioning is deliberately one of no change (providers of traditional goods and services) are affected by changes in the way governments tax and regulate them, or how suppliers supply to them.

When it comes to business continuity, plans and management must keep pace with business changes too. But is an annual, a quarterly, or even a monthly BCP review the right way to stay synchronized?

There is a dilemma with business continuity planning and reviews. Make them too infrequent and there is a greater risk of falling out of alignment with the business, leaving the organization less able to react effectively to threats of business interruption; make them too frequent and the review process itself consumes time and resources the business can ill afford.



The data center is on a clear trajectory toward greater abstraction, greater resource distribution, and greater diversity in both the workloads it supports and the technologies it brings to bear.

All of this leads to an increasingly complex management challenge that pits the need for greater autonomy among users and applications against the needs of the enterprise to maintain data availability and security while keeping budgets under control.

According to Shay Demmons, executive VP of BaseLayer’s RunSmart software division, this challenge is compounded by the fact that most organizations are branching into new IoT and service-level data architectures that must reach back to legacy infrastructure for crucial data support. This calls for a “looking forward, looking backward” management approach that, in fact, utilizes many of the same technologies that are driving the transition to digital services – things like sensor-driven data systems, advanced visibility and intelligent automation that propel workflow management and resource allocation to the speed of modern business.



With so many files in existence and so many more being created every moment, it’s no wonder so many breaches and data loss incidents occur. We asked the experts for some of the top tips on keeping storage data protected.

1. Limit and Monitor Access

Many of the big data breaches we read about in the news trace their origins back to one of these two issues, and most likely both: too much access and little or no monitoring of that access. These are some of the biggest problems in data security, according to Rob Sobers, Director at Varonis.

The 2017 Varonis Data Risk Report found that 20 percent of folders are open to every employee. Forty-seven percent of organizations in the report had at least 1,000 sensitive files containing personal data, health records, financial information or intellectual property open to every single user. Not only are sensitive files open to more people than necessary, but access abuse is not monitored and flagged. This is why 63 percent of data breaches take months or years to detect, according to the report.
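The "open to every employee" problem the report describes can be checked mechanically. As a rough, POSIX-only illustration (this script is a sketch, not tooling from the Varonis report), walking a file tree and flagging world-readable directories gives a first approximation of over-broad access:

```python
import os
import stat


def find_open_dirs(root):
    """Return directories under root whose permission bits grant
    read access to every user on the system -- the file-system
    equivalent of a folder "open to every employee"."""
    open_dirs = []
    for dirpath, _dirnames, _filenames in os.walk(root):
        mode = os.stat(dirpath).st_mode
        # S_IROTH: readable by "other"; adding S_IWOTH would also
        # flag world-writable directories, which are worse still.
        if mode & stat.S_IROTH:
            open_dirs.append(dirpath)
    return open_dirs
```

A real audit would, of course, have to account for ACLs, group membership, and share-level permissions, which simple mode bits do not capture.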



Tuesday, 06 June 2017 15:14

8 Vital Data Protection Tips

(TNS) - With an above-average hurricane season predicted, the lack of leadership at two agencies responsible for protecting the United States' coastlines should be a sobering thought, said a widely admired general who led the military’s response to Hurricane Katrina.

The National Oceanic and Atmospheric Administration, which runs the National Hurricane Center, and the Federal Emergency Management Agency are both without leaders. Those positions must be appointed by President Donald Trump and confirmed by the U.S. Senate, CNN reported.

“That should scare the hell out of everybody,” retired Lt. Gen. Russel Honoré told CNN. “These positions help save lives.”

Honoré, who served as commander of Joint Task Force Katrina and coordinated military relief efforts, told CNN that the disaster proved “how important leadership was.”



Many critical industries such as nuclear energy, commercial and military airlines—even drivers’ education—invest significant time and resources in developing processes. The data center industry … not so much.

That can be problematic, considering that two-thirds of data center outages are related to processes, not infrastructure systems, says David Boston, director of facility operations solutions for TiePoint-bkm Engineering.

“Most are quite aware that processes cause most of the downtime, but few have taken the initiative to comprehensively address them. This is somewhat unique to our industry.”



Nearly 6.9 million homes along the Gulf and Atlantic coasts have the potential for storm surge damage with a total estimated reconstruction cost value (RCV) of more than $1.5 trillion.

But it’s the location of future storms that will be integral to understanding the potential for catastrophic damage, according to CoreLogic’s 2017 storm surge analysis.

That’s because some 67.3 percent of the 6.9 million at-risk homes, and 68.6 percent of the more than $1.5 trillion total RCV, are located within 15 major metropolitan areas.



The Business Continuity Institute

Recently, the National Crime Agency (NCA) and National Cyber Security Centre (NCSC) launched their first joint report into ‘The cyber threat to UK businesses’. It outlines the key trends expected across the cyber security industry over the coming months. Ransomware, which has experienced rapid growth over the last year and presents a hugely lucrative industry for cyber criminals, was recognised as an escalating threat to UK businesses.

Creating and deploying ransomware has never been easier. Malicious code needed to create the ransomware can now be readily outsourced, with 'Ransomware as a Service' models already available on the dark web, where wannabe attackers can purchase ready-made malware packages. This ease of procurement, teamed with the financial opportunity associated with targeted attacks, means ransomware will continue to be a huge threat in 2017.

The targets?

This increased accessibility has significantly broadened the variety of potential attackers in recent years, and as such it’s hard to generalise about the motivations of individuals. Whether it's a lone actor operating from a bedroom, a politically-motivated hacktivist, or an international criminal organisation with salaried employees, everyone is a target to someone.


Individual consumers and smaller organisations represent low value targets. At this end of the spectrum, ransomware is a numbers game, and attackers tend to follow the path of least resistance. In practice, that means working through organisations that meet certain basic criteria (e.g. charities in London, with <£5m turnover), or individuals that represent demographics with little to no education in cyber security.


Larger organisations with valuable datasets and a public reputation to protect obviously represent high-value targets, and often attract the most sophisticated attacks as a result. One of the key determinants of severity is the level of access privileges held by the infected user. This makes power users such as sysadmins and senior executives far more valuable targets than ordinary users. Attackers can spend weeks or even months probing attack vectors in order to locate senior individuals susceptible to compromise.

The recipe for a successful attack

Whoever the target is, the rise of cryptocurrencies has increased the degree of anonymity afforded to criminals taking ransom payments. Cyber criminals balance risk and reward. Taking payments as cryptocurrency means the reward has stayed constant, whilst the risk of being caught has dropped significantly.

Although the government’s report advised UK organisations to combat cyber attacks by reporting attacks, promoting awareness and adopting cyber security programmes, it failed to acknowledge the more immediately actionable role that good business continuity practices can play in surviving and recovering from cyber attacks. Whilst outright prevention of a ransomware attack may be impossible, good continuity practices, such as a carefully tailored backup solution, can effectively negate the consequences.

What continuity practices can organisations implement to ensure they recover as quickly as possible?

Planning your defence

Something that was omitted from the government’s advice report is the importance of having an effective incident response plan in place. We typically advise that companies should plan for impacts and test for scenarios. Impact-based planning works on the basis that while there are an infinite number of possible disasters, the number of potential consequences at the operational level is much smaller. Scenario-based planning asks users to anticipate the consequences of a disastrous event and to create solutions ahead of time.

However, certain threats do warrant specific response plans, and this is certainly the case for ransomware. Ransomware can lie dormant on servers for a period of time to deliberately outlast a backup strategy. As a result, it needs a different approach and plan to recover effectively.

Testing, testing, testing

Once this plan has been established, it is vital to then test that plan and make sure it works. Where this isn’t possible, organisations should run exercises such as a tabletop test as a minimum. This involves organisations responding to a simulated disruption by walking through their recovery plans and outlining their responses and actions.

Plans should be regularly reviewed, updated and tested. This ensures that in the event of an incident, plans can be executed as effectively as possible with minimum impact to everyone concerned. It would be advisable for UK organisations to make a ransomware attack the next focus of any future continuity planning if they have not done so already.

Road to recovery

In a ransomware attack, a business has two choices: recover the information from a previous backup, or pay the ransom. In many cases, even when a ransom has been paid, the data has not been released, so paying does not guarantee you will get your data back.

There are two main objectives when recovering from ransomware: to minimise the amount of data loss, and to limit the amount of IT downtime for the business. The fastest way to recover from most incidents is to fail over to replica systems hosted elsewhere, but these traditional disaster recovery services are not optimised for cyber threats. Replication software will immediately copy the ransomware from production IT systems to the offsite replica, and will often retain only a limited number of historic versions to recover from, so by the time an infection has been identified, the window for recovery has gone. This means that ransomware recovery can be incredibly time consuming and requires reverting to backups, often trawling through historic versions to locate the clean data.
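The retention-window problem can be made concrete. In this hypothetical sketch (the function and dates are illustrative, not taken from any vendor's tooling), recovery means finding the newest restore point that predates the infection; if dormant ransomware has outlasted the retention period, no clean point remains:

```python
from datetime import date, timedelta


def latest_clean_restore_point(restore_points, infection_date):
    """Return the most recent restore point taken strictly before the
    ransomware first landed, or None if the retention window no longer
    reaches back that far (i.e. the malware outlasted the backups)."""
    clean = [p for p in restore_points if p < infection_date]
    return max(clean) if clean else None
```

With, say, 14 daily restore points, an infection detected five days in still has a clean point to fall back to; ransomware that lay dormant for three weeks leaves nothing to restore, which is exactly why a ransomware-specific retention strategy matters.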

Ransomware attacks will only increase, so organisations must regard infection as a matter of ‘when’ rather than ‘if’ and take the appropriate steps to mitigate the risks. The advice from the government provides a solid foundation, but it is imperative that organisations have an effective response plan and backup strategy to support it.

Peter Groucutt, managing director at Databarracks

Tuesday, 06 June 2017 14:01

BCI: How to Survive a Ransomware Attack

The Business Continuity Institute

84% of UK small business owners and 43% of senior executives at large companies are unaware of the forthcoming General Data Protection Regulation, despite there being less than a year until the law comes into force. The GDPR is designed to bring greater strength and consistency to the data protection given to individuals within the European Union.

Shred-It's seventh annual Security Tracker survey also found that only 14% of small business owners and 31% of senior executives were able to correctly identify the fine associated with the new regulation – up to €20 million or 4% of global turnover. This is despite a large proportion of senior executives (95%) and small business owners (87%) claiming to have at least some understanding of their industry’s legal requirements.
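The fine that so few respondents could identify is "the greater of" two quantities, which can be expressed directly (a one-line illustration of the upper-tier rule quoted above):

```python
def max_gdpr_fine(global_turnover_eur):
    """Upper tier of GDPR administrative fines: whichever is higher
    of EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)
```

So a firm turning over €100 million faces the flat €20 million ceiling, while one turning over €2 billion faces up to €80 million; the percentage element is what makes the regulation bite for large enterprises.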

Businesses which are unaware of the forthcoming legislation and its implications are not only putting themselves at risk of severe financial penalties, but also of the reputational damage caused by adverse publicity associated with falling foul of the law. This can often have a greater impact than the fine itself. Research shows that 64% of executives agree that their organization’s privacy and data protection practices contribute to reputation and brand image.

Data breaches are already the second greatest cause of concern for business continuity professionals, according to the Business Continuity Institute's latest Horizon Scan Report, and once this legislation comes into force, bringing with it higher penalties than already exist, this level of concern is only likely to increase. Organizations need to make sure they are aware of the requirements of the GDPR, and ensure that their data protection processes are robust enough to meet these requirements.

Of those respondents who claim to be aware of the legislation change, only 40% of senior executives have already begun to take action in preparation for the GDPR, in spite of 60% agreeing that the change in legislation would put pressure on their organization to change its policies related to information security.

The survey also highlights that companies feel the UK Government needs to take more action. 41% of small business owners (an 8% increase from 2016) believe that the Government’s commitment to information security needs improvement.

Robert Guice, Senior Vice President Shred-it EMEAA, said: “As we approach May 2018, it’s crucial that organizations of all sizes begin to take a proactive approach in preparing for the incoming GDPR. From implementing stricter internal data protection procedures such as staff training, internal processing audits and reviews of HR policies, to ensuring greater transparency around the use of personal information, businesses must be aware of how the legislation will affect their company to ensure they are fully compliant.”

“Governmental bodies such as the Information Commissioner’s Office (ICO), must take a leading role in supporting businesses to get GDPR ready, by helping them to understand the preparation needed and the urgency in acting now. The closer Government, information security experts and UK businesses work together, the better equipped organizations will find themselves come May 2018.” 

While risk oversight has always been an important part of the board’s agenda, the disruptive financial crisis of 2007-2008 taught everyone a lesson about just how important it is. In the aftermath of the global financial meltdown and credit crunch, risk oversight became an imperative for boards of public companies, particularly in the United States. Boards of listed companies on U.S. stock exchanges across all industries took a hard look at their membership, how they operated and whether their operations and the information to which they have access are conducive to effective risk oversight.

In addition, since the financial crisis, regulators have taken an active interest in board risk oversight. For example, the Securities and Exchange Commission in the United States requires that proxy disclosures shine the spotlight on the board’s role in overseeing the company’s risk management process, directors’ qualifications for understanding the entity’s risks and evaluation of the entity’s various compensation arrangements by the board’s compensation committee to ensure they are not encouraging the undertaking of excessive, unacceptable risks.

As a result, the risk oversight playbook has evolved over recent years, during which time many boards formulated their respective approaches to risk oversight and organized themselves accordingly. To that end, in 2009, the National Association of Corporate Directors (NACD) published its Report of the NACD Blue Ribbon Commission – Risk Governance: Balancing Risk and Reward. This report recommends 10 principles to assist boards in strengthening their oversight of the company’s risk management.



Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of “more” without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.

The Existing Environment

Although cloud computing, virtualization and hosted data centers are popular, most organizations still have at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations.

The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.



Some managed services providers and IT services companies may view the cloud as a threat, hesitant to relinquish control of their customers’ environments to larger players such as Microsoft, Amazon or Google.

But the cloud isn’t going anywhere, and enterprises are already demanding the features and benefits it provides.

Savvy MSPs and IT services firms aren’t resisting incorporating the cloud into their offerings—they’re embracing it!

Why are these companies willing to build solutions on someone else’s cloud infrastructure?



This is part 5 of a multi-part series on the Analytics Operating Model.

As many organizations embark on the transition from “proof-of-concept” to production, selecting the optimal data platform can be a task not for the faint of heart. According to Gartner, through the end of 2017, approximately 60 percent of big data projects will fail to go beyond piloting and will be abandoned. Compounding this, only 15 percent of deployed projects will make it to production, compared with 14 percent in 2016. These numbers raise the question: Why can’t enterprises make the leap?

The traditional IT concept of maintaining infrastructures and applications for years no longer applies. As the needs of the enterprise mature, so should the ecosystem. Capabilities to handle real-time streaming data, in-flight transformation, and the shift from Extract-Transform-Load (ETL) to Extract-Load-Transform (ELT)/Schema-on-Read are no longer concepts but realities, and they require the adoption of newer tools and techniques.
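The schema-on-read shift can be shown with a minimal sketch (the field names and records here are invented for illustration): raw records are landed untouched — the "EL" in ELT — and the schema, meaning type coercion and defaults, is applied only when the data is read, so older records with missing fields never break ingestion:

```python
import json

# Raw events landed as-is; note the second, older record lacks "region".
raw_events = [
    '{"user": "a", "amount": "12.50", "region": "EU"}',
    '{"user": "b", "amount": "3.99"}',
]


def read_with_schema(lines):
    """Schema-on-read: parse and coerce types at query time,
    supplying defaults for fields absent from older records."""
    for line in lines:
        rec = json.loads(line)
        yield {
            "user": rec["user"],
            "amount": float(rec["amount"]),      # coerce on read
            "region": rec.get("region", "UNKNOWN"),  # default on read
        }
```

Under classic ETL, the second record would have been rejected (or the pipeline re-engineered) when the schema changed; under ELT, the reader simply absorbs the variation.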



The Business Continuity Institute

Welcome to the age of disruption, uncertainty and opportunity.

The development of technology is transforming every sphere of our lives. Societies, government and organizations are seeing change at a velocity which has not been witnessed in the history of mankind. The rules of every industry are being rewritten while geopolitics, regulations and globalization are making the world an increasingly complex place to survive and succeed. No one is exempt - including our industry.

Poised at the crossroads of change stands an opportunity. It is called India.

A sixth of the world's population is expected to grow at a pace faster than the rest of the world. 1.25 billion Indians have unprecedented energy and hunger to make a difference to this planet. It is also said that the world is coming to India and will continue to do so for some time. As a country, we are excited and humbled by this massive opportunity. We are also looking forward to the benefits coming from it. This massive tide of opportunity has the potential to lift everybody, promising to benefit individuals, organizations and entire industries.

If India continues on its present growth course, it could have a US$5 trillion economy within the next 20 years. But our national ambition is to perhaps go beyond that. This journey will have its own set of challenges and will not guarantee growth by default. Multiple industries and sectors will evolve in parallel and not always in synergy. At the same time India’s ambition will also collide with global realities at play. In this context, resilience will take a whole new meaning for India.

It is now our time to create our unique point of view in our domain and shape the future of our profession. Navigating this means constant exploration of the strands, fragments, dynamics and even contradictions that form a part of this unfolding narrative. We are grateful to the Business Continuity Institute for helping us start the 20/20 journey in India.

India is one of the BCI’s growth markets, and David Thorp, Executive Director at the BCI, recently said: “the 20/20 Groups are an integral part of the BCI’s aim to shape the future of our profession. This is a space to engage in provocative thought, play with ideas and engage with fellow experts. India holds so much potential in leading our profession and shaping future practice, and I’m counting on the India 20/20 Group to bring out those ideas worth spreading to the rest of the world.”

As such, thought leadership has never been more relevant. We are looking for fresh, powerful thought leadership for the India 20/20 Group, and our mission to achieve this is based on three core beliefs.

  • New thinking can create a better future for business continuity
  • Our ideas have the power to influence global paradigms on business continuity
  • Modern day thought leadership requires a broad base, engaging a more global audience

We believe the challenge for us is how to lead our audiences and stakeholders so that they can explore, alter and shift their understanding about India’s business continuity story and also contribute to the global Think Tank of business continuity professionals.

As leading practitioners of business continuity in India, I invite you to be part of this exciting journey. If you are curious and are able to open new paths through your expertise we can enable you to explore, engage and launch your own thought leadership. If you would like to find out more about the BCI 20/20 Think Tank India Group, or if you would like to submit your interest in becoming a part of it, just click here.

Together let us take business continuity forward.

Arunabh Mitra MBCI is the Chief Continuity Officer of HCL Technologies. He is involved in the leadership of the BCI Hyderabad Forum and also leads the BCI 20/20 in India.


Director of Technical Marketing, iland

It only takes one storm to change one’s life and business. Natural disasters strike with little to no warning and can be devastating to an organization’s operational and economic infrastructure. In today’s world of 24/7/365 always-available business, if a business is down, even for a few hours, customers will not wait for recovery. They will find someone else and take their business there. The total cost of not just downtime but damaged brand, reputation, and lost customers is monumental. The Federal Emergency Management Agency (FEMA) estimates that 40 percent of businesses do not reopen after a disaster and an additional 25 percent fail within one year. This failure rate is primarily due to businesses’ fundamental lack of preparedness.

Hurricane season officially began on June 1st and the National Oceanic and Atmospheric Administration is expecting the 2017 season to be above average. Businesses should prepare for the hurricane season by educating their employees and examining what best practices need to be adopted to maintain business continuity through disaster situations.

IT business continuity and disaster recovery in today’s world is no longer shipping backups to another location. In the event of a hurricane, entire infrastructures can be down for hours or days. What happens if power to the building is out for three days? What happens if there is no internet for a week? In extreme cases, actual damage to equipment can happen. Insurance can recover the cost of physical damage, but your business needs to be up and running as soon as something happens.

True disaster recovery and business continuity mean the ability to be up and running again somewhere else, quickly and reliably, within minutes. This doesn’t mean restoring a backup to a server standing by in some closet in another state, but actually moving entire operations in near real-time and continuing the business. This can be done with a variety of cloud and software services that move up-to-the-minute changes to the secondary location. In an ideal situation, businesses can fail over their operations with little disruption ahead of any storm hitting. Once business is up and running in a safe location, the focus can return to employee and community safety, knowing that business needs are already taken care of.

When crafting a business continuity plan for an IT organization, I suggest a five-step approach:

  • Step one: understand the technology options available. Are backups sufficient, or is a true disaster recovery (DR) solution needed?
  • Step two: categorize IT systems. Which systems are most critical to day-to-day operations? Often, organizations will take a hybrid approach to data protection, employing a DR solution for mission-critical applications while protecting less-critical applications with backups.
  • Step three: implementation. When considering implementation of a business continuity solution, consider how – and how much – to invest. Once you have assessed what it will take to keep the business running, pair that with the appetite for on-premise investment, capital and operating expenses, and ongoing management.
  • Step four: build the business continuity plan. At this point there are a number of decisions to be made – what sorts of situations constitute a disaster for the company? Who can declare a disaster and enact the plan? What are the formal procedures?

The final step is testing. Just like a test evacuation of a building in an emergency, a test of resilient IT infrastructure is important, not only to gain confidence that it will work but to gain an understanding of how to accomplish a complete business failover so that, in the case of a disaster, it really is as simple as clicking a few buttons.

Unlike sudden natural disasters such as tornadoes or earthquakes, hurricanes do allow you some lead time in order to enact a plan. What can’t be planned for is how long it will be before you can have your data center up and running again. Make sure wherever operations are running – be it a secondary location or a third-party cloud – it meets performance, security, and compliance needs.

Here are two examples of customers who understand their risk and have enabled true business continuity solutions in their environment.

Woodforest National Bank Finds a Summertime Home for Its Data

Woodforest National Bank, headquartered in Houston, Texas, experienced disaster during Hurricane Ike in 2008, losing power at its primary datacenter and remaining on generator power for 10 full days. Not wanting to experience that level of catastrophe again, its IT team transitioned from disaster recovery to disaster avoidance by pre-emptively “failing over” all production applications to a secondary site in Austin, TX every June, with a planned return to the primary site once hurricane season wraps up at the end of October.

What makes this failover of an entire datacenter a seamless action for Woodforest is its infrastructure, which is 95 percent virtualized, running a hypervisor with Zerto hypervisor-based replication. This combination facilitates a much faster and less error-prone DR process, creating a strategy that is prepared to overcome any disaster.

R’Club Strengthens Its DRaaS Plan to Care for Children of First Responders

R’Club Child Care, Inc. is a not-for-profit childcare provider in the Tampa Bay, Florida area that cares for more than 4,000 children of first responders. R’Club’s IT team runs all its servers on on-premises VMware and supports more than a dozen applications, all virtualized. While they run Veeam in their environment and back up systems to a local SAN, they found that utilizing the off-site backup option through Veeam Cloud Connect helped them maintain mission-critical IT applications at an affordable cost for the non-profit during times of disaster.

Prior to adopting a secure DRaaS with Veeam solution, R’Club worked with a local partner to lease space for replication with a nearby data center. R’Club used an off-the-shelf NAS device to copy their backups off-site. The process was cumbersome and error-prone, as the device would repeatedly fail and required rebooting. Further, off-site backups didn’t provide the assurance of ongoing availability that R’Club required. It would take hours or days to recover a system – and with their charter supporting first responders in the hurricane zone in Florida, that was time they couldn’t afford.

Now, if R’Club’s data center is swept away by a hurricane, the service provider can restore data through its BaaS operation.

Careful planning and understanding worst-case scenarios for business can help organizations build a comprehensive business continuity plan and disaster recovery strategy. Many companies have good intentions and start these plans, but fail to follow through. Now is the time to reflect on what is in place and consider if the current DR plan will get businesses through an unplanned disaster.

Before beginning a discussion on human-centered risk, it is important to provide context for why we must consider new ways of thinking about risk. The context is important because the change impacting risk management has happened so rapidly, we have hardly noticed. If you are under the age of 25, you take for granted the internet as we know it today and the ubiquitous utility of the World Wide Web. Twenty-five years ago, dial-up modems were the norm, and desktop computers with “Windows” were rare except in large companies. Fast-forward 25 years … today we don’t give a second thought to the changes manifest in a digital economy for how we work, communicate, share information and conduct business.

What hasn’t changed (or what hasn’t changed much) during this same time is how risk management is practiced and how we think about risks.  Is it possible that risks and the processes for measuring risk should remain static?  Of course not, so why do we still depend solely on using the past as prologue for potential threats in the future?  Why are qualitative self-assessments still a common approach for measuring disparate risks?  More importantly, why do we still believe that small samples of data, taken at intervals, provide senior management with insights into enterprise risk?



(TNS) - If you’ve seen one hurricane, you’ve seen them all, right? Wrong.

Even the most seasoned Gulf Coast residents have something to learn about the amazingly complex and destructive storms that are hurricanes.

That’s why we asked an expert meteorologist to share his knowledge. Rocco Calaci is a partner and chief meteorologist at a weather technology company called MetLoop. He’s been studying the weather for 46 years now, and his daily email on Gulf Coast weather has thousands of readers.

Here are six common misconceptions he said people have about hurricanes:



The software-defined data center (SDDC) is on the minds of many enterprise executives these days, but aside from the broad outlines of a fully virtualized infrastructure, there is little in the way of a clear direction as to how it should be architected or what operational parameters it should support.

One thing is certain, however: Organizations of all sizes are starting to view legacy data systems as one of the chief impediments to improved digital services, which themselves are seen as vital for success in the next-generation economy.

According to Wise Guy Reports, the SDDC market is projected to grow at a 22 percent compound annual rate between now and 2021, producing an estimated market of more than $81 billion. Not only is it expected to provide the scale and flexibility that many data users are getting from the cloud, the SDDC should also ease management overhead and provide more uniform resource allocation between compute, storage and networking, allowing organizations to streamline their consumption to more closely match their data requirements. Beyond that, however, the rationales for implementing the SDDC vary according to workloads, business models and a host of other factors, meaning that virtually every organization is likely to implement a highly customized version of the same basic concept.



(TNS) - Hurricane season arrives June 1, all too soon for those still trying to recover from last fall's Hurricane Matthew, but unveiling new forecast products designed to help everyone prepare for the next big one.

Emergency managers in both Volusia and Flagler counties are encouraged by the new products from the National Weather Service and the National Oceanic and Atmospheric Administration. They include an interactive storm surge watch and warning graphic, and an experimental graphic estimating the time and date a storm's winds may arrive in a given region.

"This information really helps people to know what to expect during a disaster and I'm really hoping it saves lives," said Steve Garten, emergency services director for Flagler County.



Strong buildings, levees and seawalls play an essential role in increasing resilience to floods and hurricanes, but insurers are also looking to natural infrastructure to mitigate storm losses.

As the 2017 Atlantic hurricane season officially begins, an ongoing effort by insurers, risk modelers, environmental groups and academics is focused on understanding how natural defenses like coastal wetlands and mangrove swamps can reduce the impact of storms.

A 2016 study led by researchers at the University of California, Santa Cruz, the Nature Conservancy and the Wildlife Conservation Society, found that more than $625 million in property losses were prevented during Hurricane Sandy by coastal habitats in the Northeast.



Successful companies have always been customer obsessed. How to get them, how to keep them, how to make them advocates are all questions that have kept CEOs awake at night since commerce began. Today’s sophisticated data management platforms offer marketers the digital tools to aggregate and mine their data to answer those questions.

The core capability that every DMP offers is the identification of the characteristics of a company’s customers and prospects, addressing the “how to get them” question. Armed with that information, marketers allocate advertising resources and buy media based on those audience characteristics. It’s the very essence of programmatic buying.

But there’s more to marketing than media buying. The most effective marketing operations exercise control over every customer interaction, including email, website, social, call centers… every customer touchpoint, making sure they are relevant and delivering value because that’s how you keep customers. DMP leaders know that and are building, buying, and integrating to collect data from and deliver insights to every digital channel the CMO cares to include.



It’s summertime, and nothing typifies it more in the United States than a parade on one of its summer holidays. Keeping with this tradition, the Acronis 12.5 release rolls out a parade of new features that help differentiate it in a crowded market. Three of them caught my eye as features that few other backup software products currently offer: security software that authenticates preexisting backups, the flexibility to customize the names of archived backups, and event-based backup scheduling.

These days almost any product briefing with any backup software provider starts with some mention of how they deal with ransomware and, really, how can you blame them? Management is usually more aware of the quality of the coffee in the break room than their company’s ability to recover from backup. Ransomware has changed all of that. Suddenly having viable backups from which one can quickly and easily recover from a ransomware attack has the attention of everyone from department level managers to executives in corporate boardrooms.

While it can be said with some level of certainty that any properly configured backup software product provides some level of protection against ransomware, the ability of each backup software product to do so varies. Further, ransomware is rapidly evolving. Acronis recently became aware of at least one iteration of ransomware that corrupts or infects older backups that reside on disk. In this variation, even possessing older backups may not guarantee a good restore, since the ransomware may infect those backups.

This is how the Acronis 12.5 feature parade begins: by setting itself apart using security software to authenticate backup files. Part of its new Acronis Active Protection feature functionality, the security software actively monitors preexisting backup files for any changes to them and compares them to the original state of the backup. If unauthorized changes to older backups are detected, it creates an alert and restores the backup to its original state.
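The general pattern behind this kind of backup authentication, comparing archived backup files against a known-good baseline and flagging anything that changed after the fact, can be sketched with a simple hash check. This is a minimal illustration of the idea, not Acronis code; all names and file paths are hypothetical:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of a backup file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MiB chunks so large backup files don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_baseline(backups: list[Path]) -> dict[str, str]:
    """Capture the known-good state of each backup right after it is written."""
    return {str(p): fingerprint(p) for p in backups}

def find_tampered(baseline: dict[str, str]) -> list[str]:
    """Compare current contents against the recorded baseline.

    Any mismatch means the archived backup was modified after it was
    written -- for example, by ransomware targeting on-disk backups --
    and should be restored from a trusted copy.
    """
    return [name for name, digest in baseline.items()
            if fingerprint(Path(name)) != digest]
```

In practice a product would also protect the baseline itself (for instance, by signing it or storing it out of band), since a digest store the attacker can rewrite defeats the check.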

The second feature found in Acronis 12.5 focuses less on engineering and more on Acronis listening to its customer base. Its beta testing for 12.5 was done by nearly 3,000 of its customers, and a recurring piece of feedback from that testing was the desire to give archived backups more meaningful names. Up to that point, archived backups were given an automated, predetermined name assigned by the Acronis backup software. In version 12.5, users have the flexibility to assign names to archived backups that are easier to identify and use by applications other than Acronis.

The third feature up in the Acronis 12.5 parade is its introduction of event-based backup scheduling. Almost every administrator has had a moment of concern or regret after decommissioning, patching, or upgrading a server and realizing they did not think to take a backup first in case a fail-back or recovery became necessary. Event-based backup scheduling takes this worry off their plate. Once the feature is configured for protected servers, Acronis detects when one of these activities is initiated and completes a backup before the server is decommissioned, patched, or upgraded.
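The idea of event-based scheduling, triggering a protective backup from a lifecycle event rather than from a clock, can be sketched as a pre-event hook. This is a conceptual sketch only; the function and event names are hypothetical, not Acronis's implementation:

```python
from typing import Callable

# Lifecycle events that warrant a protective backup before they run.
BACKUP_TRIGGERS = {"decommission", "patch", "upgrade"}

def run_with_pre_event_backup(event: str, server: str,
                              backup: Callable[[str], None],
                              action: Callable[[str], None]) -> None:
    """Take a backup of the server before any risky lifecycle event.

    For events in BACKUP_TRIGGERS, the backup runs first, guaranteeing
    a restore point exists for fail-back; other events run unchanged.
    """
    if event in BACKUP_TRIGGERS:
        backup(server)   # creates the restore point the admin would otherwise forget
    action(server)       # then decommission/patch/upgrade as requested
```

The point of the pattern is that the backup is coupled to the event itself, so the administrator no longer has to remember it as a separate step.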

Young or old, everyone seems to love a parade in the summer as they watch the participants and anticipate the next float coming around the bend. The Acronis 12.5 release contains a parade of features with three of them providing organizations new benchmarks by which to measure backup software in both innovative and practical manners. Whether it is helping companies to better detect and protect against ransomware or giving them better, more practical means to manage backups in their environment, Acronis 12.5 provides some of the key new features that organizations need in today’s rapidly evolving world of data protection.


Having an effective business continuity programme does not just mean making sure your own organization has a plan in place to deal with disruptions, it also means ensuring that your supply chain is resilient too. How would your organization cope if your supplier was no longer able to supply, or perhaps their supplier and so on? As the saying goes: you’re only as strong as your weakest link.

The 2016 Supply Chain Resilience Report, published by the Business Continuity Institute showed that one in three organizations had experienced cumulative losses of over €1 million during the previous year as a result of supply chain disruptions. Furthermore, the report showed that 70% of organizations had experienced at least one supply chain disruption during this same time period, while 22% had experienced at least eleven.

Has your organization experienced a disruption to its supply chain? What were the causes and consequences of those disruptions? Help inform the next Supply Chain Resilience Report by taking a few minutes to complete the survey, and be in with a chance of winning a £100 Amazon gift card.

Find the survey here: https://www.surveymonkey.co.uk/r/supplychainresilience2017

You can read the previous report here: http://www.thebci.org/index.php/bci-supply-chain-resilience-report-2016

The Business Continuity Institute

The Business Continuity Institute has provided further evidence that attaining the CBCI credential greatly enhances your career prospects. The latest Salary Benchmarking Report revealed that those who are certified BCI members, on average, get paid considerably more than their non-certified colleagues.

The BCI's Global Salary Benchmarking Report is a study of over 1,000 business continuity and resilience professionals that seeks to discover the remuneration packages that those in the industry receive, whether it is salary, bonus or other benefits. In addition to the global report, there are also region-specific reports for Australasia, Europe, North America, UK and USA.

In Europe, those who are members of the world’s leading Institute for continuity and resilience can earn, on average, 45% more than those who aren’t members, while in North America they can earn 21% more, and 19% more in Australasia. The study showed that Switzerland is the country to work in if you want to command the top salary, while North America was the top region overall.

“The CBCI credential has become the certification of choice for those entering the business continuity profession, and this is why 1,200 new professionals attain it each year,” said David Thorp, Executive Director of the Business Continuity Institute. “The study shows just how highly regarded the CBCI is, with organizations willing to pay their employees more for successfully achieving it.”

There is an extremely healthy outlook for those in the industry, with the majority of business continuity and resilience professionals indicating their contentment with their career as more than half (54%) have a positive outlook, while only 6% have a negative outlook. More than 7 in 10 respondents (72%) express being either satisfied or very satisfied in their current role, while only 1 in 10 (10%) stated they were dissatisfied or very dissatisfied.

It is perhaps this overall contentment that is the reason why only 14% of business continuity and resilience professionals have changed employers in the last twelve months, while more than 6 in 10 have no plan to change employers over the next twelve months.

The report did reveal, however, a considerable gender gap with males getting paid far more than females. The highest gap was in Europe with a 64% difference between the two.

Looking at today’s lineup of software products that one would classify as cloud data protection, one might assume that every such product natively offers source-side deduplication. One would be wrong. Software such as HPE Data Protector does not natively offer source-side deduplication, but its reasons for opting out make sense once one takes a deeper look at the product.

Currently DCIG is conducting research into cloud data protection in anticipation of releasing Buyer’s Guides related to that topic. HPE Data Protector is one such product that offers cloud data protection through integration with Microsoft Azure as well as its own HPE Helion cloud storage offering. However, in researching Data Protector, I was a bit surprised to learn that it does not natively offer source-side deduplication.

Now, please note the nuance of what I am saying here. HPE Data Protector DOES support source-side deduplication. To get this functionality, one must purchase either an HPE StoreOnce or a Dell EMC Data Domain deduplicating backup appliance. Each of these products offers an agent that then performs client-side (source-side) deduplication.
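Source-side deduplication means the agent fingerprints data chunks on the client and ships only chunks the target does not already hold, saving network bandwidth as well as storage. A minimal sketch of the concept follows; it uses naive fixed-size chunking and an in-memory store for illustration, and is not StoreOnce or Data Domain code:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking for simplicity; real agents often use variable chunks

def dedup_send(data: bytes, store: dict[str, bytes]) -> tuple[list[str], int]:
    """Split data into chunks and 'send' only chunks the target hasn't seen.

    Returns the chunk-hash recipe for the stream and the number of bytes
    that actually had to cross the wire.
    """
    recipe, sent = [], 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only previously unseen chunks are transferred
            store[digest] = chunk
            sent += len(chunk)
        recipe.append(digest)     # the recipe references every chunk, new or not
    return recipe, sent

def restore(recipe: list[str], store: dict[str, bytes]) -> bytes:
    """Reassemble the original stream from its chunk recipe."""
    return b"".join(store[d] for d in recipe)
```

Running a second backup of unchanged data through `dedup_send` transfers zero bytes, which is why repeated full backups become cheap once the target already holds the chunks.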



Long before the explosion of data, long before The Oxford Dictionary recognized “cybersecurity” as a word, and long before every device known to man could connect to the Internet, most businesses simply built their own data centers.

Today, creating and planning a data center strategy has become incredibly complicated. Companies now have a whole slew of choices: modernize an old facility, build a new one, lease, use colocation and/or cloud services and any number of combinations. Knowing which option will best fit the needs of the business starts with asking the right questions.

Tim Kittila, director of data center strategies for Parallel Technologies, says when he first meets with customers he asks the following questions about their business goals and objectives (not data center goals) and IT growth projections.



If confidential information didn’t exist, you wouldn’t have to worry about data breaches. If the vectors for malware were eliminated, your organisation and its employees would no longer be at risk from malware infection, and the loss and damage that can go with it.

However, rubbing out all such information would likely destroy assets of considerable value, reduce competitive advantage to zero, and even make it impossible to do business.

Likewise, isolating an organisation from all external communication would be a recipe for disaster. A more balanced approach to information deletion or rejection is therefore required.



While the world was responding to the WannaCry attack — which only utilized the EternalBlue exploit and the DoublePulsar backdoor — researchers discovered another piece of malware, EternalRocks, which actually exploits seven different Windows vulnerabilities.

Miroslav Stampar, a security researcher at the Croatian Government CERT, first discovered EternalRocks. This new malware is far more dangerous than WannaCry. Unlike WannaCry, EternalRocks has no kill switch and is designed in such a way that it’s nearly undetectable on afflicted systems.

Stampar found this worm after it hit his Server Message Block (SMB) honeypot. After doing some digging, Stampar discovered that EternalRocks disguises itself as WannaCry to fool researchers, but instead of locking files and asking for ransom, EternalRocks gains unauthorized control on the infected computer to launch future cyberattacks.



The Business Continuity Institute

It’s true to say that without the many volunteers who give their time to the BCI globally we would not be where we are now; it is the efforts of volunteers that have fuelled the engine of our growth since our formation over twenty years ago.

Whatever stats show on the economic value of volunteers, it would be wrong to imply that the primary reason to use volunteers is to reduce the cost of paid staff. This organization is owned by its members and relies on the work they carry out across so many of our activities. Indeed it’s true to say that the paid staff of the Institute are here to help those members who volunteer their time and their skills and not the other way around. No measurement of economic value takes into consideration the knowledge and experience these volunteers bring with them.

Volunteers enable us to engage a more diverse range of people across the resilience community, and make the most of their expertise. As an organization headquartered in the UK but operating within a global environment, it is important that we are inclusive to ideas, local knowledge and cultural insight from all quarters, and volunteers support us in this. With members in over 100 countries and a Central Office of only 25 employees, even though they do represent several nationalities, it would be extremely difficult for the BCI to have the impact it does in many of these countries without volunteers from within our membership community playing a major part.

With over 70 community groups worldwide, there are hundreds of business continuity and resilience professionals offering up their own time to help enhance the reputation of the Institute, and the industry as a whole. These volunteers help promote the highest standards of both professional and technical competency, and facilitate networking opportunities to enable business continuity professionals to come together and share good practice, exchange ideas and build valuable relationships. Volunteers also act as our ambassadors, helping to support the regional growth of the BCI by extending our reach.

Volunteers help inform the development and delivery of our activities and services by bringing in fresh opinions, ideas and approaches, as well as subject matter expertise. This helps us to make sure we are relevant to the industry and those who work in it. Take our Good Practice Guidelines for example; these are currently being reviewed in order to launch a revised edition at the BCI World Conference in November, and this review has largely been carried out by volunteers, about 60 of them from across the world, each ensuring that global good practice is just that – global!

Of course, volunteering doesn’t just offer a benefit to the BCI, we’d like to think it provides value to the volunteer as well. There are many reasons why people choose to offer their time to support the BCI. For some it is simple altruism and the enjoyment of giving back to their community. For others it is the networking opportunities that volunteering brings. For many it is the chance to develop new skills, perhaps even to be able to include it on their CV. On top of that, from a well-being point of view, studies have shown that volunteering can lead to enhanced self-esteem, reduced stress and improved health.

We, as an Institute, are heavily reliant on the work carried out by our incredible volunteers, and this is reflected in the responsibilities they are given. However, as an organization, we have a responsibility toward them too: a responsibility to invest in volunteering and ensure they are getting as much support from us as they need. I’m always open to ideas, so if you have any suggestions on what we can do to better support you then please do get in touch.

In the UK, we have just started Volunteers’ Week, a week-long celebration of all that volunteers do for the benefit of others. The theme of the week is ‘you make the difference’ and that is certainly the case with the BCI. We are extremely grateful for all our volunteers, and owe them a debt of thanks for everything they do to support the aims of the BCI.

David Thorp
Executive Director of the Business Continuity Institute

The Business Continuity Institute

The latest Salary Benchmarking Report published by the Business Continuity Institute has shown a clear gender pay-gap across multiple demographics within the business continuity industry. The report suggests that the profession, and arguably society as a whole, contains some major disadvantages that need to be addressed urgently.

The BCI's Global Salary Benchmarking Report is a study of over 1,000 business continuity and resilience professionals that seeks to discover the remuneration packages that those in the industry receive, whether it is salary, bonus or other benefits. In addition to the global report, there are also region-specific reports for Australasia, Europe, North America, UK and USA.

Perhaps the most alarming finding of the report is that Europe has the most notable pay-gap between genders as, on average, males earn a salary that is 64% higher than females. In North America they earn 24% more, while in Central and Latin America the gap is 19%. In Sub-Saharan Africa and Australasia the gap drops to 12% and 11% respectively. In the Middle East and North Africa, the gap is significantly reduced with only 3% difference between males and females. The report identified that only in Asia did females, on average, earn more than males.

When the results are broken down by level of education, regardless of whether the respondents had the equivalent of A-levels, an undergraduate degree or a postgraduate degree, males still earned more than females. For those with A-Levels, or their equivalent, there is a 7% gap, and for those with a postgraduate degree there is an 11% gap. However, for those with an undergraduate degree, males earn a third more than females.

Analysing the results on the basis of age shows that the difference in the ‘18-34’ category was marginal, but it increased to 16% in the ‘35-44’ category, and up to 25% in the ‘45-64’ category, showing that the gap widens as careers progress. Or, more to the point, it perhaps suggests that females are not progressing in their career at the same pace as males.

Experience also affected the gender pay gap. One of the few categories where females had a higher salary than males was in the ‘0-9 years' experience’ category, but this soon changed as males with ‘10-19 years' experience’ earned about a third more than females in the same category. The gap narrowed again as males with ‘20-29 years’ experience’ and ‘30+ years' experience’ earned 21% and 14% more respectively.

Whatever way the data is broken down, in the vast majority of cases, males receive greater remuneration than females, even when they are at the same level. Of course there may be other factors involved, but the results very much suggest an imbalance in pay between male and female business continuity professionals.

“As a profession we need to do more to ensure that there is diversity and equality,” said David Thorp, Executive Director of the BCI. “We should not have barriers in place that exclude 50% of the population from wanting to be a business continuity and resilience professional, and clearly taking home less pay at the end of the month is a barrier.”

Founded in 1994 with the aim of promoting a more resilient world, the Business Continuity Institute (BCI) has established itself as the world’s leading Institute for business continuity and resilience. The BCI has become the membership and certifying organization of choice for business continuity and resilience professionals globally with over 8,000 members in more than 100 countries, working in an estimated 3,000 organizations in the private, public and third sectors.

The vast experience of the Institute’s broad membership and partner network is built into its world class education, continuing professional development and networking activities. Every year, more than 1,500 people choose BCI training, with options ranging from short awareness raising tools to a full academic qualification, available online and in a classroom. The Institute stands for excellence in the resilience profession and its globally recognised Certified grades provide assurance of technical and professional competency. The BCI offers a wide range of resources for professionals seeking to raise their organization’s level of resilience, and its extensive thought leadership and research programme helps drive the industry forward. With approximately 120 Partners worldwide, the BCI Partnership offers organizations the opportunity to work with the BCI in promoting best practice in business continuity and resilience.

The BCI welcomes everyone with an interest in building resilient organizations from newcomers, experienced professionals and organizations. Further information about the BCI is available at www.thebci.org.

Imagine a remake of the original Indiana Jones movie, Raiders of the Lost Ark. Eerie music is playing during that iconic final scene. The camera closely follows a clerk pushing a crate containing the Ark of the Covenant. It pans out. In the 21st Century version, the vast warehouse doesn’t contain endless rows of shelving and boxes. It is completely empty except for a desk, a computer and a shipping bay. Cut to a UPS truck arriving at the bay to take the crate to an unknown destination.

The IT version would be a data center manager sitting at a computer console in an otherwise empty basement. All the hardware and software that used to be neatly arranged in rows within that data center is now in the cloud. Could this be the fate that awaits us if Everything-as-a-service (XaaS) progresses to its logical conclusion?

“That vision of a manager in an empty data center could eventually apply to some businesses,” says Colm Keegan, a senior analyst at Enterprise Strategy Group. “In those cases, the data center manager’s job would be the overseer of external providers to ensure the enterprise received the performance and capabilities it required.”



Resilience is the key for any business wanting to thrive in an ever-changing world. A new standard just published will help put organizations in a better position to meet the challenges ahead.

Climate change, economic crises and consumer trends are just some of the pitfalls that can dramatically affect the way an organization does business and survives. Organizational resilience is a company’s ability to absorb and adapt to that unpredictability, while continuing to deliver on the objectives it is there to achieve.

A new standard, ISO 22316, Security and resilience – Organizational resilience – Principles and attributes, provides a framework to help organizations future-proof their business, detailing key principles, attributes and activities that have been agreed on by experts from all around the world.



Data scientist, named the best job in America for 2016 by the job site Glassdoor, is a mashup of traditional careers, from data analysis, economics, and statistics to computer science and others.

Although tech companies Microsoft, Facebook, and IBM employ the most data scientists (227, 132, and 98, respectively), according to a report by RJ Metrics, these professionals are also in demand in non-tech sectors. Kohl’s, AAA, and Publisher’s Clearing House are all searching for at least one on Glassdoor.

It’s no surprise that most people who choose this career begin by studying science, technology, engineering, and math (STEM)–subjects at the very core of innovation and emerging high-tech fields. The positive contribution these subjects make to the US economy and to the nation’s competitiveness in the global high-tech marketplace is undeniable.



Wednesday, 31 May 2017 15:29

Big Data Experts in Big Demand

Traditional crisis management is adjusting to the new “Always On” era. In this new landscape, it is critical to be proactive and adaptive in managing events that pose threats to your organization. An effective Crisis Management Program is driven by situational awareness, effective communications, and constant testing exercises. 

The key components of a complete Crisis Management Program include:



DataCore recently released the findings of its latest survey, which point to some disappointment in the technologies that are purported to enable next-generation storage workloads and make life a little easier for overworked IT administrators.

After quizzing 426 IT professionals, the software-defined storage (SDS) vendor found that nearly a third (31 percent) suffered false starts or were generally disappointed by how cloud storage failed to yield cost savings. Twenty-nine percent of respondents found object storage difficult to manage.

For some, flash didn't live up to its performance-boosting reputation. Sixteen percent said their SSD-packed arrays and other flash-filled storage systems fell short of accelerating their applications.



Wednesday, 31 May 2017 15:27

Top Storage Technology Letdowns

Venues that attract crowds, such as large sports events and concerts, are reviewing their security measures, both inside and out, to prevent an attack such as the suicide bombing after an Ariana Grande concert in Manchester, England, that killed at least 22 people.

Most venues have strict rules about bags, backpacks and coolers. Some check items thoroughly before allowing them inside an arena and others do not permit them at all. Venues also employ security detail to check those attending events as well as plainclothes detail to monitor the crowd. In the United States, the Department of Homeland Security warned that the U.S. public may experience increased security at public events.

Hong Kong’s AsiaWorld Expo, where Ariana Grande is scheduled to hold a concert in September, said it plans to improve security at all concerts and events. Besides baggage inspection, there will also be metal detectors and search dogs, it said in a statement.



Wednesday, 31 May 2017 15:26

Large Venues Reviewing Security Measures

(TNS) - As soon as forecasters classified last year's Hurricane Matthew as a Category 4 cyclone with 140 mph winds just off Florida's east coast, Brevard County, Florida's emergency management staff knew it was time to make some decisions.

A little less than 30 hours before the hurricane was projected to hit nearby, on a Wednesday morning, Brevard County instructed its vulnerable residents to evacuate. They opened 15 shelters and prepared for the worst. But by Friday morning, the storm had deviated slightly from its projected track, remaining just offshore, leaving the area, and much of the state, with less damage than expected.

A palpable excitement from residents who narrowly missed major damage from a severe hurricane could rightfully be expected. Not so, said Brevard County's operations coordinator, John Scott. Instead, he found complaints of inconvenience. It was as if people wanted to come back to devastation, he said, shaking his head.



Despite progress made since the Zika and Ebola crises, most countries are not adequately prepared for a pandemic and are still investing too little to strengthen preparedness.

A report by the International Working Group on Financing Preparedness (IWG), established by the World Bank, finds that the investment case for financing pandemic preparedness is compelling.



Do you live or work in the eastern parts of the United States? If so, then you likely have a keen interest in the NOAA forecast for 2017. The Atlantic hurricane season could see between two and four major hurricanes in 2017, according to the latest forecast from the National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Center. There’s only a 20 percent chance that this season will be less active than normal.

The Atlantic hurricane season officially begins June 1, but one named storm, Arlene, already hit land last month.

NOAA expects between 11 and 17 named storms (with sustained winds of 39 mph or higher), and from five to nine hurricanes (winds of 74 mph or higher) this season.
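The wind-speed thresholds NOAA uses here can be expressed as a simple classification. This sketch uses only the two thresholds given in the forecast above (sustained winds in mph); the function name is my own:

```python
def classify_storm(sustained_mph: float) -> str:
    """Classify a tropical system by NOAA's sustained-wind thresholds."""
    if sustained_mph >= 74:
        return "hurricane"            # sustained winds of 74 mph or higher
    if sustained_mph >= 39:
        return "named storm"          # tropical-storm strength, 39-73 mph
    return "tropical depression"      # below naming threshold
```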



Now that so many people and enterprises have rushed headlong into mobile, cloud, or both, it’s time to take a step back and consider your security posture relating to these two items. It’s an unfortunate fact that when cloud and mobile are used together, their security risks are not just additive, but multiplicative.

Mobile software and devices are already targets for malware, a problem compounded by the naivety of end-users when downloading apps and the lack of discipline in keeping mobile antivirus software up to date.

Cloud databases in an environment of shared access and multitenancy are also attractive to hackers. How should organisations deal with such threats?



Improved compliance programs, sufficient resources and board access have meant fewer concerns about personal liability for compliance executives, according to a study by DLA Piper.

In its 2017 Global Compliance & Risk Report, DLA Piper found that 67% of chief compliance officers surveyed said they were at least somewhat concerned about their personal liability and that of their CEOs, which was down from 81% in 2016. And 71% said they made changes to their compliance programs based on recent regulatory events, up from just 21% a year earlier. The study found that globally the compliance function is becoming more independent and prominent in large organizations.



The Business Continuity Institute

By Tony Hisgett from Birmingham, UK (BA Terminal Heathrow) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

Both Heathrow and Gatwick Airports, two of the busiest airports in the UK, were in chaos last weekend as a "catastrophic IT failure" at BA caused many flights to be cancelled. About 300,000 passengers were affected as a result, showing once again how important it is for organizations to have a business continuity programme in place to help manage through such disruptions.

An outage like this shouldn't come as a surprise to organizations. After all, the Business Continuity Institute's latest Horizon Scan Report featured IT and telecom outages third on the list of threats that business continuity professionals are most concerned about. For anyone trying to fly in or out of Heathrow or Gatwick, it is clear just how disruptive they can be. Organizations must therefore be prepared to deal with the possibility that one will occur.

While effective business continuity management isn't necessarily about carrying on as normal, but rather about ensuring that priority activities can still be carried out in the event of a disruption, it is hard to believe that keeping flights running at two busy airports, on a bank holiday weekend at the start of the half-term school holidays, wasn't one of BA's priorities.

To be fair to BA, as an airline they must put passenger safety as their number one priority, and this could mean that the alternative arrangements other organizations would put in place simply aren't an option for them.

"This outage is yet another demonstration of how reliant we are on our IT systems and the severe disruption it can cause when they go down," said David Thorp, Executive Director of the Business Continuity Institute. "With such reliance, we need to be prepared for the inevitability that at some point they will go down, and so have plans in place to prevent it turning into a crisis."

If an organization is so reliant on its IT systems, as BA clearly is, what processes are in place to ensure that, when the IT goes down, work can still continue? It was reported that the IT failure was the result of a loss of power; if so, where was the back-up power supply? If there were processes in place, had they been tested to make sure they work?

What has annoyed people most about this particular incident, and many others like it, is not the disruption itself; most people recognise that these things happen. Rather, it was the lack of communication, as those who experienced the worst of the disruption felt they were not being told anything. Organizations need to make sure that crisis communications are part of the planning process so that when things go wrong, stakeholders are kept fully informed. IT systems can be recovered relatively quickly; a damaged reputation can take far longer to repair.

There has always been some degree of risk involved in transporting dangerous goods (DG)/hazmat, with the responsibility for compliance typically assigned solely to the compliance or shipping department. Today, with more than 1.4 million DG shipments being made daily in the U.S. and a greater number of goods now classified as hazardous, that risk has multiplied exponentially — and so have rules and regulations.

The challenge of ensuring compliance with these complex and changing regulations is made even more difficult due to shifts in responsibility within many organizations, with the role of hazmat compliance now often involving a number of divisions, including IT, supply chain, compliance, warehouse, shipping, EHS (environmental, health and safety) and more.

The path to safety and compliance requires a commitment to developing the necessary infrastructure, establishing the right processes and having the right personnel to carry it out.

The result is not just an enhanced brand from being a good corporate citizen; it can give your business a competitive edge and boost your bottom line by helping to reduce costs, mitigate risk and virtually eliminate penalties and fines due to violations and rejected shipments.



The world is littered with thousands of examples of the problems associated with data center strategy mistakes around capacity and performance.

For example, Lady Gaga fans brought down the vast server resources of Amazon.com soon after her album “Born This Way” was offered online for only 99 cents. Similarly, a deluge of online shoppers caused the data center to crash after they bombarded Target.com for a mammoth sales event. And, of course, there was the famous healthcare.gov debacle, when an ad campaign prompted millions of Americans to rush to the website for healthcare coverage only to face long virtual lines and endless error messages. It is estimated that more than 40,000 people at any one time were forced to sit in virtual waiting rooms because available capacity had been exceeded.

Each of these examples highlights why data center managers have to make sure their data center strategy stays ahead of organization expansion needs as well as watching out for sudden peak requirements that have the potential to overwhelm current systems. The way to achieve that is via data center capacity planning.
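The capacity-planning discipline the article calls for often starts with a simple headroom check: project peak demand against installed capacity and flag when the safety margin is gone. A minimal sketch, with hypothetical numbers and function names (none of this comes from the article):

```python
def peak_utilization(baseline_load: float, peak_multiplier: float,
                     capacity: float) -> float:
    """Projected peak demand as a fraction of total capacity."""
    return (baseline_load * peak_multiplier) / capacity

def needs_expansion(baseline_load: float, peak_multiplier: float,
                    capacity: float, headroom: float = 0.8) -> bool:
    """Flag expansion when a projected spike would exceed the headroom target.

    headroom is the maximum utilization you are willing to run at
    (e.g. 0.8 = 80%), leaving margin for unforeseen surges.
    """
    return peak_utilization(baseline_load, peak_multiplier, capacity) > headroom
```

With a steady load of 100 units, a 10x flash-sale spike against 1,000 units of capacity would hit 100% utilization and trip the 80% headroom threshold, while a 4x spike would not.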



Cybercrime is one of the biggest challenges society faces today.

As the world becomes more digitized and dependent on connected systems and devices, the threat and the potential impact is exponential.

The recent WannaCry attack was a wake-up call, and we should expect to see global attacks of this nature accelerate.

There is good news.



Thursday, 25 May 2017 14:45

CEO Forum: WannaCry Raises the Red Flag

Having the right business continuity tools can make the work you do on your BCM program easier and more consistent. In this post, we’ll explore categories of tools that will make your program more efficient and help you be prepared to respond effectively to a crisis event.

Here at MHA Consulting, we have had the opportunity to see multiple business continuity tools in action. While we strive to be tool-agnostic, not necessarily recommending any single tool, we do work with our clients to ensure that the tools they use will best meet their needs and requirements.

There are many providers in these spaces; the ones listed are those we are familiar with through use in client engagements or other situations. A review of these tools may be a good place to start.



The Business Continuity Institute

With only one year to go before the European Union General Data Protection Regulations (GDPR) deadline, many US businesses with European customers are not fully prepared to comply with the new laws, which include ‘Right to be Forgotten’ customer consent mandates and regulations on how customer data is handled. US companies, or any organization that stores data on EU citizens, will face hefty fines or lawsuits if they don’t fully comply - up to 4% of annual turnover or €20 million, whichever is greater.

More US large-company CIOs now say they are well-briefed on the impending laws, up from 73% when asked the same question last year. However, only 60% have detailed plans in place to address the new laws' requirements. This is up from 33% in last year's survey, but suggests there is still significant work ahead.

94% of the large US company CIOs surveyed say their companies have personally identifiable information (PII) on EU customers, making the new mandates applicable to them.

Particularly challenging is the mandate to obtain customer permission to use PII in application testing, a critical part of software development. 55% of US firms have a plan in place to address this, but nearly one-third say they don’t fully understand the impact of this ruling.

The data complexity of modern systems is also an issue, as 85% admit it’s sometimes difficult to know exactly where all their customer data resides, an increase from last year’s survey with 78% then admitting that difficulty.

“US organizations are heading in the right direction on GDPR compliance, but there is still work to be done to improve data governance capabilities,” said Chris O’Malley, CEO of Compuware. “Manual processes that are used to locate and protect customer data must be replaced with automated capabilities that enable businesses to quickly, accurately and visually manage data privatization and protection.”

The findings also reveal US organizations are better prepared for the GDPR than their European counterparts. Compared to the 60% of US companies saying they have detailed and far-reaching plans in place, only 19% of UK companies have such plans prepared, a modest improvement of only 1% since last year.

US respondents ranked their biggest GDPR compliance hurdles to overcome as follows:

  • Design and implementation of internal processes (65%)
  • Securing customer consent to use their personal data and handling the process of data withdrawal if requested by the customer (64%)
  • Ensuring data quality (52%)
  • Cost of implementation (43%)
  • Data complexity (41%)
Thursday, 25 May 2017 14:35

BCI: US more prepared for GDPR than UK

The Business Continuity Institute

As companies outsource processes and services, they expose themselves to a plethora of third-party risks. Whether it's data security, business disruptions or compliance risks, organizations must have the relevant measures in place to mitigate their potential impact on business continuity and reputation.

A report by MetricStream, however, shows that one in five survey respondents (21%) reported that their organization has faced significant risks due to third parties during the last 18 months. Of those that shared financial impact data on the losses, a quarter said that the loss was greater than £8 million (generated through cost of downtime, regulatory fines and reputational damage).

The report also revealed that nearly three quarters (73%) of businesses do not track fourth parties, meaning they have no visibility past their immediate suppliers. This finding echoes some of the concerns raised in the Business Continuity Institute's latest Supply Chain Resilience Report, which revealed that only two-thirds of organizations maintain adequate visibility over their full supply chain.

French Caldwell, chief evangelist at MetricStream, commented: “As companies continue to outsource their processes and services in order to decrease costs, streamline or scale up quickly, they are opening themselves up to risks. However, despite some supplier incidents costing upwards of £8 million, 44% of the respondents said that their business had no dedicated third-party risk management function. Furthermore, as enterprises rapidly adopt cloud services, entities that would have been third-parties when the services were managed in-house become fourth parties which are more difficult to monitor.

“Businesses can no longer plead ignorance. They are responsible for the actions of their third-parties and they will bear the brunt of any fallout. For example, if a business shares sensitive data with a third-party without checking if it has relevant cyber security, and that supplier suffers a data breach, under some rules the company could be liable. Not only will it suffer reputational damage, but new regulations such as the EU GDPR could see large fines imposed too."

I’ve said it before, and I’ll say it again: All companies, no matter the size or the industry, will eventually be targeted by hackers, cybercriminals and other bad actors. At the same time, more and more instances of cyberattacks are being carried out against high-ranking executives, many of them C-level executives and directors. Not only do these individuals have access to a company’s most sensitive and confidential information, but often, they have the least amount of oversight and the worst cybersecurity habits.

For a corporation, falling victim to such attacks is damaging enough for obvious reasons (just ask Yahoo!), but for a high-ranking business leader, the fallout is particularly embarrassing, as it signals a clear lack of awareness about basic security precautions. Further, leadership is being held increasingly accountable for a wide swath of security missteps, a narrative that all too frequently plays out in news headlines and almost always ends in the loss of a job, an investigation or legal action.

With all of these consequences considered, one would hope that leadership is scrambling to close critical security gaps. But new research from Diligent and the New York Stock Exchange’s Governance Services paints a starker picture.



Not All Emergency Notification Systems Are The Same

Does your company have a modern mass communication system? When I say “modern,” I am referring to one that doesn’t rely solely on email or phone; one that is able to contact employees on multiple devices simultaneously; one that can be activated in a matter of seconds and reach its intended audience within minutes. I’m going to add another feature into the mix because it is so invaluable when it comes to reaction time: interactive maps.

Interactive maps use GPS to track and monitor employees and events – not in a creepy, big-brother way but in a way that ensures employees are safe and accounted for no matter where they work. GPS can provide more immediate location information to help first responders act quickly when seconds count. Think of it this way: if you were working in a location where an emergency struck, would you be uncomfortable or thankful that your employer was sending help to your exact location within seconds of the incident?
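To illustrate how GPS fixes could feed dispatch decisions, here is a sketch that finds employees within a given radius of an incident using the standard haversine great-circle formula. The function and data-structure names are hypothetical; a real mass notification product would expose its own API.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points, in kilometres."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def employees_near(incident: tuple, employees: dict, radius_km: float = 5.0) -> list:
    """Return employee IDs whose last-known GPS fix is within radius_km of the incident.

    employees maps an ID to a (lat, lon) pair from the device's last check-in.
    """
    lat0, lon0 = incident
    return [eid for eid, (lat, lon) in employees.items()
            if haversine_km(lat0, lon0, lat, lon) <= radius_km]
```

One degree of longitude at the equator is roughly 111 km, which gives a quick sanity check on the distance function.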



(TNS) — California will probably introduce a limited public earthquake early warning system next year, researchers building the network say.

Earthquake sensing stations are being installed in the ground, software is being improved and operators are being hired to make sure the system is properly staffed, Egill Hauksson, a seismologist at the California Institute of Technology, said at a joint meeting of the Japan Geoscience Union and American Geophysical Union.

The new stations are particularly important for rural Northern California, where gaps in the network have put San Francisco at risk for a slower alert if an earthquake begins on the San Andreas fault near the Oregon border and barrels down to the city. Last summer, California lawmakers and Gov. Jerry Brown approved $10 million for the early warning system.



(TNS) - When Hurricane Opal plowed through the Florida Panhandle, those who chose to ride out the devastating storm were left without power and resources for days.

That was back in September 1995. Kelly Jo Bailey, disaster program manager for the local American Red Cross, was living in Bay County at the time of the fatal and historic Category 4 hurricane. She said the community was changed the instant the storm made landfall.

"It brings the best and worst out of people," she said. "You can have people stealing or looting, but you also have neighbors helping neighbors."

Bailey, along with several volunteers and other emergency response agencies, was sharing information and assistance Saturday during Hurricane Preparedness Day at the Panama City Mall, 2150 Martin Luther King Jr. Blvd. With hurricane season starting June 1, officials were urging passers-by in the mall to have supplies on hand and make a plan before a serious storm rolls through.



(TNS) — Mass casualty events — ranging from the slaughter of 26 innocents at Sandy Hook Elementary School in 2012 to the bombs that killed three and injured more than 250 at the Boston Marathon in 2013 — have highlighted the need for a well-trained citizenry, according to a doctor promoting bleeding control techniques.

After chronicling several other mass casualty attacks in the United States and Europe, Dr. Lenworth Jacobs said, “It’s a big problem, and it’s getting worse.”

Jacobs, speaking to roughly 250 people at a trauma care symposium that Gundersen Health System sponsored Friday at Western Technical College in La Crosse, noted the attacks at schools besides Sandy Hook, including Columbine and several colleges and universities.



When people hear the word “compliance,” they often imagine red tape and a governing body restricting their free will – so suffice it to say, it’s not the most pleasant word in the English dictionary. But compliance is so much more than the equivalent of a teacher’s pet making you stay late after school. It’s an essential part of business practice, and failure to comply can lead to penalties, fraud and the loss of your business. There are many challenges that come with navigating the murky waters of compliance; luckily, technology can help solve those problems.

Bolstering Corporate Compliance with a Solid BPM Strategy

In the wake of cybercrime and corporate scandals, the age-old concept of compliance ensures companies act responsibly and are protected. With tighter compliance comes reduced legal problems, improved operations, higher productivity levels and greater employee retention.

A key area in compliance that often gets shoved under the rug is within Business Process Management (BPM) – a systematic approach to making an organization’s workflow more effective, more efficient and more capable of adapting to an ever-changing environment. Compliance in this area consists of a) demonstrating that you have documented your process, b) demonstrating that you have followed that process, c) demonstrating you have visibility over all processes and d) demonstrating that you can spot cases in which processes were not followed.
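Points (b) and (d) above — demonstrating that the documented process was followed and spotting cases where it was not — can be illustrated with a toy audit routine. This is a hypothetical sketch, not any particular BPM product's API:

```python
def audit_process(documented_steps: list, executed_steps: list) -> dict:
    """Compare an executed workflow against its documented process.

    Returns a compliance verdict plus any skipped or out-of-order steps,
    so deviations can be spotted rather than shoved under the rug.
    """
    missing = [s for s in documented_steps if s not in executed_steps]
    # Were the steps that did run executed in the documented order?
    positions = [executed_steps.index(s) for s in documented_steps
                 if s in executed_steps]
    in_order = positions == sorted(positions)
    return {"compliant": not missing and in_order,
            "missing_steps": missing,
            "out_of_order": not in_order}
```

For a documented flow of draft → review → approve, an execution that skips review fails on `missing_steps`, while one that approves before reviewing fails on `out_of_order`.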



I have gotten some inquiries about where spending on artificial intelligence and cognitive technologies occurs in our tech market numbers (see, for example, "US Tech Market Outlook For 2017 And 2018: Mostly Sunny, With Clouds And Chance Of Rain"). The short answer is that we include them in our data on business intelligence and analytics, though so far spending on these technologies is still small -- probably less than a billion dollars for 2017.

But even as artificial intelligence spending grows, it is likely to remain small in terms of visibility.  That's because artificial intelligence solutions are likely to be functions in existing software products, and not something that firms buy directly.  Put another way, the biggest buyers of AI will probably be software, services, and hardware vendors, who use AI to help their products and services work better.

There is precedent for this pattern in the BI and analytics market. My Forrester colleague Boris Evelson has been collecting data from the leading BI vendors on the percentage of their revenues that comes from end customers versus from OEMs (original equipment manufacturers). On average, about 10% of these vendors' revenues come from sales to OEMs. And that could well be understated, because vendors like IBM, Microsoft, Oracle, or SAP don't provide data on the explicit (or more likely implicit) value of the analytics products used within their applications.



Let me pose a question: “Is it a bad thing to give the average person a hand grenade with the pin pulled?” I think most of us would respond to that question with an emphatic “YES!”  No one in their right mind would think it's a good idea in any possible reality to allow anyone without extensive military or professional training to access an explosive--especially not one that is live and has no safety device in use. Bad things would happen, and people would probably lose their lives; at the very least, there would be damage to property. No matter what, this scenario would be a very bad thing and should NEVER happen.

OK, now let me change that question a bit: “Is it a bad thing for every person with a network connection to have access to extremely powerful nation-state-level cyber weapons?”  Hopefully you would respond similarly and say “YES!”

Just as juggling a live hand grenade is a problem, so is the proliferation of nation-state-level exploits. These malicious tools and frameworks have spread across the world and present a very complicated problem that must be solved. Unfortunately, the solution we've currently been offered amounts to a variety of vendors slinging tools that, without good strategy, cannot effectively combat the myriad cyber artillery shells now being weaponized against every system that touches the web. The bad guys have now officially proven that they can “outdev” the defensive technologies in place in many instances, and it's highly likely that many installed legacy technologies across the planet are wide open to these weaponized attacks (anti-virus be darned).



The Business Continuity Institute

In 2016 global supply chains continued to face a range of security, social responsibility, and business continuity risks, with many of these issues provoked by one another, according to BSI's Global Supply Chain Intelligence Report.

The report noted multiple incidents that started out as a security, social responsibility, or a business continuity risk that cascaded into other supply chain issues. The European migrant crisis is perhaps the best example of a type of event that began as a single security risk, before building into a business continuity disruption as countries imposed border controls, which in turn was exacerbated by blocked migrants looking for work, often falling victim to forced labour in certain nations. As risks, such as the migrant crisis, continue to evolve, it's imperative that organizations work together to take a holistic risk management approach to ensure they are informed and prepared to address multiple areas of concern.

In 2016, governments in Asia responded to increasing levels of supply chain risks, but many policies were merely reactive and often led to further threats to the integrity or continuity of the supply chain. BSI observed a shift in labour strike threats in China in 2016, driven mainly by concerted government efforts to limit strikes in the country following years of increasing labour disruption. Labour strikes still occurred in large numbers across China last year, but the number of strikes dropped in 2016 for the first time in recent years. Strikes at factories dropped by 31%; with two-thirds of provinces – including major apparel, consumer goods, and electronics production hubs – witnessing a decline in manufacturing strikes. An emerging area of concern is the growth in strikes in the logistics sector, including trucking, shipment processing, and delivery, which rose more than fourfold from nine incidents in 2014 to 40 last year.

Asia also saw an increase in labour rights concerns in Bangladesh in both the ready-made garments sector and in other industries. A December 2016 survey of the Dhaka slums found a far higher incidence of child labour than previous government studies had suggested, with 15% of children employed in formal and informal enterprises. Additionally, the survey found that a significantly larger proportion of children were employed in the formal RMG sector than had been previously believed. The study also documented abusive practices in garment factories that employed children. Over 37% of girls reported being forced to work overtime, while children employed in the formal garment sector earned only half the national minimum monthly wage for garment workers.

Europe experienced significant terrorist attacks in Nice, France in July and Berlin, Germany in December, along with dozens of counter-terrorism arrests across Europe in 2016. Those attacks in particular also underscored the threat that terrorists will exploit the supply chain to perpetrate attacks. In both cases, Tunisian men linked to the Islamic State in Iraq and Syria (ISIS) used cargo trucks to ram into crowds of civilians. The Berlin attacker even perpetrated an explicit disruption of the supply chain before the attack by hijacking a Polish tractor-trailer carrying a shipment of steel beams. ISIS-linked plots involving similar timing and tactics are likely to continue challenging European security into 2017.

In Turkey, a faction within the military launched a failed coup against the ruling Justice and Development Party (AKP) government, leading to significant security and business continuity impacts in the short and long terms. The Turkish government's response to the coup attempt has exacerbated security and business continuity threats in the country. Days after the coup, the government began widespread purges of numerous departments and agencies across virtually every ministry, as well as the military, police, and intelligence services. In total, more than 100,000 officials have been removed from public duty, 70,000 investigated and 32,000 arrested.

Supply chains in the Americas faced a wide range of risks related to security, corporate social responsibility, and business continuity in 2016. Cargo theft remains a main concern for the Americas with the most dramatic increase in cargo theft rates in Rio de Janeiro last year. Already the second largest hotspot for cargo theft in the country, officials in Rio de Janeiro reported a total of 9,870 cargo theft incidents in 2016, 36% more incidents than those recorded in the state in 2015. The year-over-year increase in cargo theft incidents in both Rio de Janeiro and Sao Paulo, combined with minimal efforts to curb the rate of theft, suggests that Brazil could see another year of increased cargo theft in 2017.

BSI also recorded varying degrees of improvement in corporate social responsibility protections in Latin America in 2016. The BSI SCREEN Intelligence Team reduced its rating for the threat of child labour in both Ecuador and Panama due to each country's sustained efforts to drastically reduce the problem. In Ecuador, the government cut the rate of children working in the country from the 16% recorded in 2007 to less than 3% today, while Panama succeeded in reducing its rate of child labour to about 4%, a 50% reduction since 2012. Although most countries in Latin America improved upon their corporate social responsibility record, some nations, particularly Peru, failed to make much headway last year.

In 2017, BSI expects continued threats of cargo theft and drug smuggling in the Americas and Europe, protests over wage and other labour issues across Asia, and persistent risks of terrorism, including terrorist targeting of the supply chain. New initiatives to address security, social responsibility, and continuity risks in many regions will require close monitoring to assess their effectiveness at the ground-level.

Okay, I’ll apologize right away to the IT ops teams that are already security-savvy. Hats off to you. But I suspect there are still a few that leave security to the CISO’s team.

On Friday, May 12, 2017, evil forces launched a ransomware pandemic, like a defibrillator blasting security into the heart of IT operations. What protected some systems? It wasn’t an esoteric fancy-pants security tool that made some organizations safe; it was simple e-hygiene: Keep your operating systems current. Whose job is that? IT operations’. Had the victims kept up with OS versions and patches, they wouldn’t have been working over the weekend to claw back from disaster. What’s the path to quick restoration? Having a safe offline backup. Whose job is that? IT operations’. The WannaCry ransomware outbreak is a brutal reminder that IT operations plays a critical role (or not!) in protecting the business from villains.
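The "keep your operating systems current" hygiene the author describes is easy to express in inventory terms: compare each host's OS build against the minimum version that carries the relevant patch. A minimal, hypothetical sketch (host names and version strings are invented for illustration):

```python
def _version_tuple(version: str) -> tuple:
    """Turn a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, minimum: str) -> bool:
    """True if the installed version is at or above the minimum patched build."""
    return _version_tuple(installed) >= _version_tuple(minimum)

def unpatched_hosts(inventory: dict, minimum: str) -> list:
    """Return hostnames whose recorded OS build is below the minimum patched version.

    inventory maps hostname -> dotted OS build string, e.g. from a CMDB export.
    """
    return [host for host, version in inventory.items()
            if not is_patched(version, minimum)]
```

A scheduled job that emails this list to IT operations turns patch hygiene from a hope into a tracked task.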



As the global WannaCry ransomware attack began spreading to computer systems around the world on May 12, Microsoft president Brad Smith quickly responded by publicly blaming part of the problem on businesses which don’t keep up with critical security patches, leaving their systems vulnerable to attackers.

Smith’s comments came in response to critics who had blamed Microsoft for leaving systems vulnerable in the first place by not doing enough sooner to assist customers and for ending security patches for older operating systems such as Windows XP and Windows Server 2003. Many enterprises, including hospitals and a wide range of businesses, still rely on systems running older operating systems or embedded operating systems, leaving them open to hackers and ransom attacks.

The problem with that argument, according to several industry analysts who spoke with ITPro, is that Smith and Microsoft are right this time to criticize IT administrators and their companies that are failing to keep their systems patched and updated.



The Business Continuity Institute

Switzerland, Luxembourg and Sweden are the three countries most resilient to the pressures of the 21st century according to the  2017 FM Global Resilience Index, with Nepal, Venezuela and Haiti making up the bottom three on their list. The study, which ranks 130 countries and territories by their enterprise resilience to disruptive events, also highlighted that the most pressing risks to business performance are cyber attack, natural hazards and supply chain failure.

For organizations concerned by the increasing incidence of cyber attack, oil-rich Saudi Arabia has emerged as a country with above-average inherent cyber risk. Its high internet penetration, combined with a limited cyber security industry, make it a more vulnerable target. Developing India, by contrast, with its growing information technology industry, emerges as a country with below-average inherent cyber risk.

For organizations aware of the heavy toll of natural disasters, Sweden has above-average resilience due, in part, to its lower-than-average exposure to hazards such as windstorms, flood and earthquakes. On the other hand, flood-prone Bangladesh, a major manufacturing hub for apparel and textiles, ranks toward the bottom of the index.

For organizations with global supply chains, Germany, a major exporter and importer, ranks near the top in resilience, driven in part by its strong ability to demonstrate where parts, components or products are in transit. Russia ranks below average in this respect.

The index also ranked countries in terms of overall enterprise resilience with wealthy Switzerland occupying the number-one spot. This reflects high scores for its infrastructure, local supplier quality, political stability, control of corruption and economic productivity. Hurricane-ravaged Haiti ranks at the bottom of the index due in part to its high natural hazard exposure and poor economic conditions.

Many of the insights originate from three new resilience drivers added to the index this year: inherent cyber risk reflects a country’s vulnerability to a cyber attack and its ability to recover; urbanization rate serves as a proxy for stress (on water supplies, power grids and other infrastructure) that would be exacerbated by natural disasters such as windstorms, flood and earthquakes; and supply chain visibility reflects the ability to track and trace consignments across a country’s supply chain.

Other drivers of resilience that form the index include: productivity, political risk, oil intensity, exposure to natural hazard, natural hazard risk quality, fire risk quality, control of corruption, quality of infrastructure and quality of local suppliers.

“Our clients have found the index valuable when making important decisions about their properties, business strategies and supply chains,” said Bret Ahnell, executive vice president at FM Global. “We upgraded the index this year to reflect escalating threats that can make a lasting impact on business performance. FM Global will continue to improve the index and make the data publicly available to any business, client or not.”

Digital transformation is a must in today’s competitive landscape, radically speeding the pace of operations and increasing the demands placed on businesses to deliver new experiences. Businesses that embrace digital transformation will capitalize on this disruption to become industry leaders. It’s a reality that rewards swiftness and agility.

But speed is nothing without control. Without proper controls, moving faster may simply mean developers are releasing security vulnerabilities faster, exposing their organizations and customers to greater risk. The pace of innovation isn’t going to slow down, so organizations have to master shipping software faster, with higher efficiency and lower risk. The primary defense for ensuring safety and speed work together is testing for compliance through Agile, Lean and DevOps (ALDO) principles.



Friday, 19 May 2017 16:31

The Case for Compliance Automation

“Go all-flash, young man,” appears to be the current mantra of the storage industry. Vendor after vendor is urging enterprises to ditch hard disk drives (HDDs) in favor of solid state drives (SSDs). You also hear about flash-first strategies that urge organizations to look at flash storage first, last and always.

While this approach makes sense in many cases, what about existing storage assets? And in particular, are there any times when all-flash storage just doesn't make sense?

Here are some possible situations where it might be wise to skip the flash:



According to certain industry analysts and software vendors, we are now midway between a stage 10 years ago when few applications used machine learning, and a stage 10 years into the future when, apparently, most applications will function with it.

The Gartner “Hype Cycle” shows machine learning due to become mainstream in software in about four or five years’ time. In that case, business continuity, like any other area of business activity, is likely to be affected. The time to start thinking about it may well be now. But what is machine learning, and how might it influence BC planning and management?

Simply put, machine learning is the capability of systems to construct models from data, without the intervention of human beings. Examples of machine learning today include self-driving vehicles and fraud detection, both of which reflect aspects of business continuity. Machine learning can be done in two ways.
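
As a minimal illustration of what constructing a model from data means, the sketch below fits a straight line to a handful of sample points with ordinary least squares, using only the Python standard library. The data points and variable names are invented for illustration; real machine learning systems apply the same idea to far larger data sets and far richer models.

```python
# "Learning" a model from data: fit y = a*x + b by ordinary least squares.
# The sample points below are invented for illustration only.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form slope and intercept: the "model" constructed from the data.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x

print(f"learned model: y = {a:.2f}x + {b:.2f}")
```

The "model" here is just two numbers fitted without human intervention; deep learning systems learn millions of such parameters, but the principle of fitting parameters to data is the same.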



Recent incidents remind us that knowledge is power. Earlier this week, US President Trump shared classified information with foreign delegates — and by doing so, he potentially declassified it. When The Washington Post first broke the story, the article became the most viewed digital news story in the publication’s history. This comes only a few days after a sweep of global cyberattacks locked major corporations and governments out of their data and threatened to release stolen content (like a soon-to-be-released Disney film) in increments. These stories remind us that those who own and control information wield power — but also that the boundary between public and private information is becoming easier to transgress.

Organizations and regulators are not the only ones contending with the power play of public and private information. Consumers are also becoming empowered as their knowledge of institutions’ inner workings and related data risks grows. As a result, consumers are striving to control their personal information. Forrester’s Consumer Technographics® data reveals that consumers around the world are motivated to manage their data, and those in the US and UK are especially conscious:



(TNS) - More than 140 courthouses across California are seismically unsafe, a study commissioned by state officials determined, and fixing just the worst dozen would cost more than $300 million.

In a serious earthquake, 145 courthouses could face “substantial” structural damage, “extensive” non-structural damage and “substantial” risk to the life of those in the buildings, says the study, presented Wednesday to a committee with the Judicial Council, which sets policy for California courts.

Glendale Superior and Municipal Courthouse received a seismic risk rating of 44.2, the highest in the state and among a dozen facilities considered very high risk. The report used seismic-risk ratings developed by the Federal Emergency Management Agency, or FEMA.



In light of the very recent WannaCry ransomware cyber attack that has impacted more than 230,000 victims in over 150 countries since it began last week, it is more important now than ever to be thinking about your organization’s business resiliency, specifically your business continuity plan and IT disaster recovery plan. Should your organization experience any type of business disruption—such as a cyberattack—the best defense is having not only a plan, but also a crisis communications platform that will aid in the management of such an event.

Business Continuity Awareness Week 2017

Given the recent cyber attack, it’s perfect timing for Business Continuity Awareness Week (BCAW) which is happening now—May 15-19, 2017, and this year’s theme is dedicated to Cyber Security.

This annual global event is facilitated by the Business Continuity Institute (BCI). The purpose of BCAW is to provide a vehicle to raise the awareness of and to showcase the value of Business Continuity Management as an integrated part of an organization’s strategy.

BCAW opens up the doors to anyone who wants to find out more about what business continuity is all about and how it might benefit their own organization. The BCI educates organizations on the importance of business continuity planning by sharing experiences, knowledge, and best practices. This year they are focused on “Building Resilience by Improving Cyber Security.”



The Business Continuity Institute

Quite often with cyber security, the public sees what might appear to be a game of cat and mouse: the perpetrators (bad guys) attack, then the cyber security establishment (government, private companies, and so on; the good guys) defend and try to plug, patch, and repair the problem after the fact. What we are missing in this picture, and what may be unreported or underreported, is how many companies and organizations are unaffected, as well as how many may have been impacted but are hesitant to admit it and risk bad publicity.

The latest example of this is the WannaCry attack, which now looks like it came from the North Korean-affiliated Lazarus group. The attack would have been defeated if organizations had simply allowed computers running Microsoft operating systems to install the update that fixed the vulnerability. On personal computers, most users allow this to happen automatically, but on corporate computers the task is generally handled by an IT department that often runs several versions behind on Windows updates.

It is interesting that, according to reports, this ransomware attack - which claims to encrypt all of users’ files and offers a payment-based decryption service to restore them - has only generated $50,000 in ransom. However, it is our guess that this number is severely underreported; we have found few people like to admit to having been a victim of this kind of attack, just as users affected by Nigerian scams often deny being victims. It’s also interesting to speculate whether people will continue to pay any ransom given that, according to reports, no one who’s paid the ransom thus far has had their files decrypted.

How can organizations break this vicious cat-and-mouse cycle? One way to effectively build and maintain organizational resilience on an enterprise level is creating a cyber security program that repels and recovers from cyber attacks, following the Four Rs of Resilience: Robustness, Redundancy, Resourcefulness, and Rapidity. For our purposes with regards to WannaCry, let’s focus on just two factors: Robustness and Redundancy.

Robustness is the ability of systems and elements to withstand disaster forces without significant degradation or loss of performance. The simple fix here is making sure all operating systems are updated, including vendor systems, home systems that may be used (or prevented from accessing corporate systems) and tertiary systems an organization relies on. More sophisticated solutions, such as a software-defined perimeter, would also have prevented the attack by establishing a dark layer and a credentialing process that restricts access.

Redundancy is the extent to which systems, elements or other units are substitutable, or capable of satisfying functional requirements if significant degradation or loss of functionality occurs. Regular backups remove the concern about having data encrypted or destroyed, as users can simply retrieve the same data from their backup.
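
As a sketch of the redundancy point, the following Python fragment makes a timestamped copy of a file or folder, so an encrypted or destroyed original can be restored from the most recent copy. The paths and function name are illustrative, not part of any product mentioned above.

```python
# Simple timestamped backup: each run lands in its own folder, so older
# copies survive even if the live data is later encrypted by ransomware.
import shutil
import time
from pathlib import Path

def backup(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / stamp / source.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    if source.is_dir():
        shutil.copytree(source, dest)
    else:
        shutil.copy2(source, dest)  # copy2 also preserves file metadata
    return dest
```

A real programme would schedule runs, verify the copies and keep at least one copy offline, since ransomware that can reach a mounted backup drive can encrypt it too.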

So in short, what’s the best way to keep your personal and organizational data safe in the age of WannaCry? It may seem simple, but it’s the most basic cyber security advice for a reason: update your systems and back up your files. Frequently.

Andrew Boyarsky and Douglas Graham are the academic director of the master’s program in enterprise risk management at the Mordecai D. and Monique C. Katz School of Graduate and Professional Studies at Yeshiva University and an advisory council member at the Katz School, respectively. The opinions expressed above are solely those of the authors and should not be attributed to Yeshiva University.

The Business Continuity Institute

Last week's ransomware attack, which affected 200,000 computer systems in 150 countries and crippled hospitals across the United Kingdom, is a frightening reminder of how much damage can be done by this type of malicious cyber attack. However, a new survey reveals that most people are ill equipped to deal with such an attack.

“It is simply unacceptable that people do not get the care they need because of cyber criminals attacking hospitals. We have a shared responsibility to collaboratively get this under control,” says Kathy Brown, President and Chief Executive Officer of the Internet Society which helped to fund the survey. “Law enforcement, IT professionals, consumers, business, and the public sector all have responsibility to act to keep enabling the good that the internet brings.”

According to the joint CIGI, ISOC and UNCTAD Global Survey on Internet Security and Trust, conducted by global research company Ipsos, before the latest attack, 6% of internet users globally had already been personally affected by ransomware, with internet users in India, Indonesia, China and the United States the most likely to be affected. An additional 11% knew someone who has been hit by these malicious programmes.

"Cyber thieves now operate on a global scale, as the most recent attack illustrates, and just about anybody can launch a ransomware attack,” says Fen Osler Hampson, Distinguished Fellow and Director of Global Security at CIGI. “Ransomware attackers have discovered that they don't have to steal or destroy your data to enrich themselves, they just have to hold it hostage. Our survey data shows that many people are willing to pay to get their data back, which makes such attacks highly profitable."

People remain largely unprepared for this new form of cyber attack, which encrypts their data and renders it inaccessible until they pay a ransom. Nearly a quarter (24%) of people admit they would have no idea what to do if their computer were to be hit with ransomware.

Many would turn to the authorities, with 22% contacting law enforcement, 15% contacting their Internet Service Provider and 9% contacting a private firm to try to retrieve their data. Unfortunately, the authorities are often unable to help. Once the data is locked, it is extraordinarily difficult to retrieve without either paying the ransom or restoring the files from a backup. Here again, internet users are woefully unprepared: only 16% of people globally indicate that they would retrieve their data from a backup.

As individuals and as organizations, our data is important to us, and so is our time; we do not want to lose either, as it could be costly. We need to make sure we have plans in place to respond to such an attack and manage through any disruption that occurs as a result. Business continuity has played an important role in the response to this latest ransomware attack, with many organizations invoking their plans and putting processes in place to ensure it didn't turn into a crisis.

Organizations of all sizes need to develop a business continuity programme. If you haven't already done so, read the Good Practice Guidelines Lite Edition, a free download published by the Business Continuity Institute that offers some basic guidance on the steps you will need to take.

In Oklahoma, for each barrel of oil extracted by energy companies, seven to 10 barrels of wastewater are produced. Oil and gas companies use a technique called ‘dewatering,’ which allows a cheap separation of oil and water, making old geologic formations economic. The water, which sits underground for millions of years getting saltier and nastier with the passage of time, must be disposed of safely. Oil companies send it to disposal wells where it is injected deep into the earth. This disposal process has been linked to an increase in earthquakes because the injected wastewater counteracts the natural frictional forces on underground faults and, in effect, “pries them apart”, thereby facilitating earthquakes. Because of wastewater disposal, earthquakes on natural faults are occurring faster than they otherwise would.

The spate of earthquakes in Oklahoma (Figure 1) over the past few years has driven earthquake insurance take-up rates in that state from 2 percent to 15 percent (higher than in California).  According to NAIC data from S&P Global Market Intelligence and the I.I.I., direct premiums written from earthquake insurance in Oklahoma increased by over 300 percent from 2006 to 2015 (Figure 2). The Oklahoma market has been declared noncompetitive as only four companies combine to write a 55 percent market share. The action gave the state Insurance Department the right to approve rate changes in advance. Some insurers suggested a better solution would be to encourage competition rather than increase regulation.



Cybersecurity is now a C-Suite concern. Major business disruptions, compromised customer data, bank heists and even state-sponsored hacks have prompted boards and CEOs to action.

In the past few years, organizations have been scrambling to recruit Chief Information Security Officers (CISOs). Accountability for cyber risk has risen as the number and magnitude of attacks has climbed well past the nuisance level into the sphere of major business risk.

How effective this management strategy will be in arresting the modern plague of cybercrime remains to be seen. A recently published global survey of C-Suite level executives and IT decision-makers (ITDMs) revealed large gaps in assessments of cyberthreats, costs and areas of responsibility. Here are three of the most significant disconnects:



Why have some organizations been around for hundreds of years while others last only five minutes? The key is to create success that lasts. ISO 9004 gives guidance to help companies achieve “sustained success” and has just reached a crucial stage in its revision process.

ISO 9004, Quality management – Quality of an organization – Guidance to achieve sustained success, is currently under revision and has just reached Draft International Standard (DIS) stage, meaning that interested parties can submit feedback on the draft before its final publication in 2018.

The standard provides a framework based on a quality management approach, within which an organization can achieve ongoing success through identifying its strengths and weaknesses, and opportunities for improvements or change. It offers guidance for enhancing the overall quality of an organization by improving its maturity level, namely in terms of its strategy, leadership, resources and processes.



(TNS) - What would you do if…..

That was the question put before multiple county agencies and first responders on Saturday as part of a mock disaster drill held at the Meigs County Fairgrounds and Meigs High School.

“Drills like these are important to test area responders and build a better working relationship between agencies in the event of a real disaster,” stated Meigs County EMA Director Jamie Jones.

Arriving at the school on Saturday morning, first responders, actors and others were given information on the scenario for their role in the mock disaster.



BATON ROUGE, La. — Hurricane season officially begins June 1 and FEMA urges Louisianans to prepare. Getting ready now goes a long way in saving lives and reducing property damage later.

Readiness for the tropical season depends on preparing, planning and staying informed.


  • Update your disaster kit. Ready.gov recommends gathering a number of items such as: a three-day supply of non-perishable food and bottled water, a battery-operated radio, a flashlight, extra batteries, cash, medicines, a first aid kit, pet foods, and important family documents.
  • Cut down or trim damaged trees and limbs, clear out debris from pipes or culverts so that water doesn’t back up and cause flooding.
  • Tie down or take inside unattached outdoor toys and furniture when a severe storm approaches.


Be Informed:

  • Download the FEMA Mobile App for disaster-related information.
  • Listen to NOAA Weather Radio or local radio or TV stations for up-to-date storm information, and be prepared to take action.
  • Search the internet or log on to Twitter with the name of your metropolitan area and the word “alerts” to be connected to the latest information.
  • Wait until local officials say it’s safe to return home before doing so.

Those who live in FEMA manufactured housing units should know this temporary housing does not provide safe shelter during a hurricane or tornado. Here are some tips for those who live in FEMA MHUs:

  • Leave an MHU when there are tornado or hurricane warnings.
  • All FEMA MHUs come equipped with weather radios; listen for storm warnings.
  • Put important items on high shelves in case of floods.

More information may be found online at www.fema.gov/pdf/areyouready/areyouready_full.pdf.




Louisiana Emergency Preparedness Guide

Get A Game Plan App 

Louisiana 2-1-1 provides information about available health and human services.
National Flood Insurance Program or call FEMA at 1-800-427-4661.

Thursday, 18 May 2017 15:51

FEMA: Prepare Now for Hurricane Season

The Business Continuity Institute

The recent ransomware attacks affecting about 200,000 networks across 150 countries, including the NHS in the UK, are a stark reminder, as if one were needed, of just how great the cyber threat is.

Our modern world is heavily reliant on IT systems, and although these systems provide many benefits, they also have their pitfalls. Research conducted by the Business Continuity Institute underlines the inevitability of an attack: its Cyber Resilience Report shows that two-thirds of organizations experienced an incident during the previous year, and 10% experienced at least ten.

Collectively we must do more to make our organizations more cyber secure. While there are mechanisms that organizations can put in place to improve cyber security, there are also steps that individuals can take. Several studies have shown that the insider threat is as much of a concern as external threats. This is not necessarily down to malicious activity, or even negligence; sometimes it is just a simple mistake.

A new paper published by the BCI – Building resilience by improving cyber security – reveals that several activities many of us perform out of habit could be making our organizations more vulnerable to the cyber threat, and identifies six simple steps each of us can take to help improve security.

  1. Use strong passwords – A study showed that ‘123456’ was the most common password used among a given sample, and the rest of the top twenty weren’t much better. Weak passwords make it far easier for intruders to gain access to our systems.
  2. Keep passwords safe – It’s all very well having a strong password, but if we’re writing that password down on a post-it note and leaving it next to our computer then we are leaving ourselves extremely vulnerable.
  3. Lock unattended computers – With studies showing the insider threat to be as much of a concern as external threat, we need to be more careful of who has access to our computers.
  4. Be cautious of public Wi-Fi – By accessing a public network, you are also potentially allowing that network, and anyone on it, access to your computer. If you are on public Wi-Fi, use a VPN to help improve security, and don’t share sensitive information.
  5. Don’t plug in untrusted devices – The report revealed that even devices from reputable sources can contain malware, so never plug an untrusted device into your computer.
  6. Don’t click on unknown links – Many attacks such as ransomware take place because users have clicked on a link they shouldn’t have and invited the intruders in. We must develop a culture whereby we think twice before clicking on links, however enticing they may appear.
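
To make step 1 concrete, here is a small Python sketch of the kind of check a sign-up form might run: rejecting very short passwords and entries from a known-weak list. The weak-password list and thresholds are tiny samples for illustration, not a real breach corpus or a recommended policy.

```python
# Illustrative password check: minimum length, a small known-weak list,
# and a minimal mix of character classes. Thresholds are examples only.
WEAK_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}

def is_acceptable(password: str, min_length: int = 12) -> bool:
    if len(password) < min_length:
        return False
    if password.lower() in WEAK_PASSWORDS:
        return False
    # Require at least two character classes (lower, upper, digit).
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
    ]
    return sum(classes) >= 2

print(is_acceptable("123456"))          # False: too short and known-weak
print(is_acceptable("CorrectHorse42"))  # True: long, mixed classes
```

A production check would compare against a large breached-password corpus and rate-limit login attempts rather than rely on composition rules alone.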

Download your free copy of 'Building resilience by improving cyber security' by clicking here.

The UK may not be hit by monsoons, but it has had its share of overflowing rivers and torrential rain wreaking havoc on British homes over the last decade.

It’s particularly England and Wales that have suffered from flooding issues; Hull in 2007, Cumbria in 2009 and many UK areas in the 2013/14 winter. The Environment Agency estimate that five million Brits actually live or work in flood danger zones.

Needless to say, if your home is listed as a flood risk, it’s important to protect the property as much as you can from any potential dangers. You should also be sure to have adequate home insurance in the event your property is affected by flooding. It’s also worth knowing a little about Flood Re, a collaborative project between the Government and insurance companies. The scheme, which launched in April 2016, ensures home insurance remains available and affordable for properties at high risk of flooding.

With that said, no insurance can protect you from the disruption and emotional trauma caused by flooding in your home or business. What’s more, many people seem unsure how best to protect their properties. What action can you take to minimise the impact of flooding on your property?



(TNS) - Boise River Flood District No. 10 might spend the next three years pulling out all of the trees that have fallen into the river this spring or are leaning so badly they soon will, said Bill Clayton, chairman of the flood control district’s board of commissioners.

That’s just for the area inside the district’s boundaries, which stretch from the Plantation Golf Course near State Street in Boise to just east of the Interstate 84 bridge over the river in Caldwell, Clayton said.

As federal water managers prepare to raise the Boise River to its highest flows since 1983 Tuesday, there are likely more trees, debris and other challenges ahead for the district’s crews to handle this spring. Clayton guessed crews will find hundreds of downed or compromised trees by the time the flooding stops.



The web offers a lot of opportunities, but with them, threats are sure to follow. The faster the technological advance, the more security gaps appear. Just in the last year, we’ve seen an unprecedented increase in the number of denial of service (DoS) attacks. DoS attacks account for more than 55% of all annual cyber crime and are the most costly cyber crimes. These attacks specifically target the vulnerabilities of hosting, nameserver, and IT infrastructures.

Denial of service renders a website or system unavailable to users, and a successful one can hit an entire online user database. That’s why DoS awareness and protection is critical to any cyber security plan.

What Is a DoS Attack?

In short, the purpose of a DoS attack is to make a host, device or environment unavailable for its intended purpose. The attacker typically causes the disruption by flooding the device with excessive requests, overloading the device and preventing the fulfillment of legitimate requests. Think of when a website is extremely slow due to increased traffic. DoS attacks simulate increased traffic through automated processes.

A cyber criminal often uses a DoS attack to take down websites, but such attacks can also disrupt any application environment in order to prevent business functions from operating normally.
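
On the defensive side, a common building block against request floods is rate limiting, which caps how many requests a client can make before further requests are rejected rather than allowed to exhaust the service. The token-bucket sketch below is a minimal, illustrative Python version; production systems would keep a bucket per client address and typically enforce this at the load balancer or firewall.

```python
# Minimal token-bucket rate limiter: allow bursts of up to `capacity`
# requests, refilled at `rate` tokens per second; excess is rejected.
import time

class TokenBucket:
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate                  # tokens added per second
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]  # a rapid burst of 5 requests
print(results)  # the burst allowance absorbs 3 requests; the rest are rejected
```

The design choice here is that legitimate short bursts pass untouched, while a sustained flood is throttled to the refill rate instead of overloading the host.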



Wednesday, 17 May 2017 15:10

What is a DoS Attack?

The Business Continuity Institute

As business enters the digital age, cyber resilience must become a regular agenda item for boards and excos. Given the extent of the cyber risks companies face, and their extreme reliance on ICT, cyber security is only a partial answer. Nobody can identify and prepare for all the risks that threaten ICT systems, so it is essential that security and risk mitigation measures are part of a wider programme to ensure that the organisation can detect a cyber attack, respond appropriately and recover operational functionality.

There are signs however that the C-suite may not yet have come to grips with the nature of the challenge posed by the digitalisation of business, and thus the extreme need to look beyond cyber security.

Research from a leading consulting firm has shown that CEOs, CIOs and Chief Information Security Officers (CISOs) alike remain confident about their cyber security measures, even as security breaches remain common. This misplaced confidence is surely one of the primary reasons why, at present, the bad guys appear to have the upper hand.

While financial services respondents reported that the number of detected incidents remained relatively unchanged from 2013, last year saw a 154% increase in detected security incidents against retail and consumer products companies, with email compromises and ransomware a growing risk and phishing at the top of the list of concerns. Accordingly, research shows security investment increased 11% in the last year, and 41% of these companies aim to address these concerns by increasing their budgets. The CISO role is increasingly reporting directly to the board, as a matter of urgency, to address the reality of cyber-related incidents.

Regulatory authorities are far from unanimous about how data ought to be protected, as the current roll-back of existing US data privacy regulations by the Trump administration shows. These kinds of regulatory gaps offer unscrupulous operators plenty of opportunity.

The growing use of accelerometers on mobile devices to report on physical activity as part of health/wellness programmes shows just how new threats are manifesting all the time. These and similar apps are insecure, and can allow hackers to 'eavesdrop' on keystrokes, and so access passwords and other sensitive information. The same vulnerability is multiplied across industrial systems as the Internet of Things takes hold, and insecure sensors and similar devices proliferate. A hacker could thus use a sensor tracking the flow of chemicals or fuel to shut a plant down, dramatically affecting whole value chains or, in the case of a power utility, the national economy.

We must accept that event- and technology-based security is no longer adequate to protect the organisation’s very ability to function. Organisations must begin taking proactive action to subsume cyber security into the broader, strategic initiative of cyber resilience.

Cyber threats cannot be considered and provided for in isolation; they must be integrated into business and organisational strategic thinking, and specifically into the business continuity management lifecycle. In so doing, the organisation will move away from a compliance mindset, becoming better able to identify cyber risks and recover from cyber incidents. In other words, becoming a cyber resilient organisation. To achieve this, cyber resilience needs to be integrated into the very corporate culture. It must form part of existing policies, rather than a silo of new ones; very critically, a cyber recovery plan must be part of the overall recovery plan.

The end goal should be that the organisation has processes and procedures in place to identify the risks it faces, mitigate them, and recover from the materialisation of any risk. Focusing on specific responses to specific threats becomes counterproductive when the risk is multiplying so rapidly.

Business Continuity Awareness Week (BCAW2017) [15-19 May] this year explores the issue of cyber resilience. Find out more about the series of webinars designed to explore this critical subject.

Karen Humphris CBCI is the Senior Manager Advisory Services at ContinuitySA, and Alex Ferguson is an Intern at ContinuitySA.

The massive cyberattack targeting computer systems of businesses, government agencies and citizens in more than 150 countries is now being linked to the North Korean government. Called WannaCry, the ransomware encrypts the victim’s hard drive and demands a ransom of about $300, paid in the virtual currency bitcoin.

According to the Washington Post:

Several security researchers studying “WannaCry” on Monday found evidence of possible connections to, for instance, the crippling hack on Sony Pictures Entertainment in 2014 attributed by the U.S. government to North Korea. That hack occurred in the weeks before Sony released a satiric movie about a plot to kill North Korean leader Kim Jong Un.

The New York Times reported that the malicious software was transmitted via email and built on tools stolen from the National Security Agency. It targeted vulnerabilities in Windows systems in one of the largest ransomware attacks on record. The virus took advantage of a weakness in Microsoft’s Windows operating system; although the flaw had been patched by the company, not all users had applied the update. Institutions and government agencies affected included the Russian Interior Ministry, FedEx in the United States and Britain’s National Health Service.



Tech buying in business and governments is clearly shifting from the sole or primary control of the CIO and the tech management organization and into the hands of business leaders.  But how much is this happening? Anecdotal comments and surveys – including Forrester’s own Business Technographics surveys – suggest that most tech purchases are now controlled by business executives.  However, in our just-published report, “C-Suite Tech Purchasing Patterns,” Forrester’s analysis shows that the shift of tech buying from the CIO to business executives is much less dramatic, with just 5% of all new tech purchases fully controlled by business by 2018.  Moreover, this shift varies dramatically by C-level executive. CMOs and eCommerce heads have the highest proportion of new project spending under their control, but CFOs, COOs, supply chain heads, and heads of customer service are much less likely to go it on their own.

The big issue in making statements about who is buying technology is the fundamental difference between how consumers buy technology and how businesses and governments buy tech.  In business and government, there is seldom one person who makes the decision to buy a piece of technology.  Instead, there is a complex process of identifying a business need, finding and choosing a vendor with the right technology solution, implementing that solution, and making sure it is working well.  Different stakeholders will be involved in each stage of this process.  The growing tech-savviness of business leaders and the wider availability of cloud solutions does mean that business leaders are playing a bigger role in the front end of this process. But the persistence of licensed software, the growing adoption of cloud as a replacement for licensed software, and the challenges of implementing and optimizing solutions mean that CIOs and tech management teams still play a dominant role in overall tech purchases by businesses and governments.

Key findings of Forrester’s analysis of data on actual tech purchases:



During a keynote at his company’s big annual conference in Silicon Valley last week, NVIDIA CEO Jensen Huang took several hours to announce the chipmaker’s latest products and innovations, but also to drive home the inevitability of the force that is Artificial Intelligence.

NVIDIA is the top maker of GPUs used in computing systems for Machine Learning, currently the part of the AI field where most action is happening. GPUs work in tandem with CPUs, accelerating the processing necessary to both train machines to do certain tasks and to execute them.

“Machine Learning is one of the most important computer revolutions ever,” Huang said. “The number of [research] papers in Deep Learning is just absolutely explosive.” (Deep Learning is a class of Machine Learning algorithms where innovation has skyrocketed in recent years.) “There’s no way to keep up. There is now 10 times as much investment in AI companies since 10 years ago. There’s no question we’re seeing explosive growth.”



“When life gives you lemons, make lemonade,” goes the popular saying, which inspires us to tackle life’s challenges in a positive way and grow and learn from hardships. For organizations struggling to meet the upcoming GDPR compliance deadline in May 2018, it may be difficult to view the massive data privacy compliance project as a positive: an investment that can change the way an organization stores and handles user data for the better.

But how can an organization successfully turn GDPR “lemons” into lemonade? By using this time to solidify its overall compliance strategy, an organization can get a return on its GDPR compliance investment. Below is a quick summary of the payoff an organization can potentially see from implementing a comprehensive GDPR strategy:



Tuesday, 16 May 2017 15:43

Turn GDPR Compliance into Lemonade

The National Emergency Number Association (NENA) said that in light of the recent ransomware attack that hit both private- and public-sector entities in multiple countries, it was not aware of any attacks on public safety answering points (PSAPs) or 911 service.

It said it was issuing a special alert to help its members defend against any attacks that may occur, according to a news release.

The so-called “WannaCry” attack leveraged recently released vulnerabilities and exploit techniques to take control of Windows-based computers. The attack software infects vulnerable machines and demands $300 or more in bitcoin. Victims that don’t pay are threatened with deletion of the encryption key, which would render their data irretrievable.



Data is the perimeter, defend it that way

Unless you have been living under a rock, or possibly hiding in the mountains of Montana with a giant beard and eating way too many government-issued MREs, you probably heard about the nuclear bomb of a ransomware attack that kicked off last week.  Welcome to the post-apocalypse, folks.  For years, many of us in the cybersecurity industry have been jumping up and down on desks trying to get the world (writ large) to pay attention to managing and patching outdated systems and operating systems running legacy software, to no avail.  Now that Pandora’s box has been opened and the bad guys have used the NSA’s leaked tools as weapons platforms, all of a sudden everyone gives a dang.  I caught no less than 17 talking heads on the news this morning stating that “this is the new reality” and “cybercrime is a serious threat to our way of life.”  Duh; also, water is wet and fire is hot.  Thank you, news.

Regardless of all the bad that is bouncing around the news and everywhere else today (and as I type this I can literally see a pew-pew map on CNN, looking like a Zika virus map, showing the spread of WannaCry behind the anchor team), the reality around this “massive hack” and “global attack” is that if folks didn’t suck at patching their systems, and followed basic best practices instead of crossing their fingers and hoping they didn’t get hit, the “end of days” malware would be basically ineffective.  The attack preys on unpatched Windows systems, including Windows XP, an old, outdated, unsupported OS that should have been pulled from use eons ago.  And if the legacy system running that OS couldn’t be pulled, IT SHOULD HAVE AT LEAST BEEN PATCHED.  Problem solved, or at least made manageable.



The Business Continuity Institute

“Maybe you are busy looking for a way to recover your files but do not waste your time.
Nobody can recovery your files without our decryption service.”

This is what users infected by the WannaCry virus read on their screens, having accidentally let the malware in. Unfortunately, the criminals were not lying in this case, as most businesses are not equipped to decrypt their files within an acceptable timeframe following an attack like this. Some might be able to recover their files, especially when the malicious code is not too sophisticated, but doing so is likely to take a long time and thus incur significant financial losses. Dealing with an infection once it happens can be painful; however, the good news is that by following the right guidelines it is possible to drastically reduce the chances of it happening.

In this regard, it is worth looking at the threat in order to better understand the response. According to Kaspersky Lab, WannaCry is an encryption programme that uses an exploit, a piece of software that takes advantage of weaknesses in an operating system (in this case Windows) in order to install malware. The main ways the exploit gets onto a computer include clicking on the wrong link or downloading a malicious attachment from an untrusted source. Once the malware is in the system, it encrypts all or part of the data and asks the victims to pay a ransom in bitcoins. If they do not pay within a few days, they can forget about all the hard work and long hours they spent in front of that machine and, sadly, start counting their losses.

The case of WannaCry shows once again how the weakest link in a computer system is the human operating on it. There is no firewall that will protect a computer from an employee clicking on the wrong link thinking it’s just another invoice. Industry research shows that the vast majority of ransomware is delivered through phishing and social engineering attacks, revealing the need for better education and awareness-raising programmes. Information security experts are doing an excellent job in designing the right technical solutions against cyber criminals, yet they might be struggling to deal with the human aspect.

In this respect, business continuity (BC) professionals can provide a great deal of help, as their job is to know a business from top to bottom, understand its weaknesses, and make sure everyone is aware of their role when preparing for a crisis. Continuity and recovery tactics place a big emphasis on resources such as IT and information equipment, while also taking into account people, premises, and suppliers. The recovery strategies adopted by BC professionals include replication, which means being able to recreate the conditions necessary to keep the business running while the main site is not operational. Thus, a BC professional will always make sure business-critical resources such as data are backed up, in case something (such as ransomware) makes them suddenly unavailable. Backing up files is the most effective and quickest way to get up and running after being hit, yet it is sometimes neglected as a practice due to a lack of threat awareness rather than technical ability.

BC professionals also know how to embed a strong safety culture among staff members, having experience in managing awareness campaigns. This can go a long way when trying to educate employees on how to avoid falling for phishing or social engineering attacks. Organizations are already starting to move in this direction: according to a BCI survey, 75% of respondents had business continuity arrangements in place to deal with cyber disruptions.

The recent attack presents a great opportunity for organizations to improve their response and make lasting changes to become more cyber resilient. In the next few weeks, 'ransomware', 'back-up' and 'disaster recovery' will probably be the buzzwords of the moment, but the real challenge will be not to forget about them in the long term. Business continuity professionals have been advocating for better arrangements to prevent disruptions of this kind for a long time, and they will keep doing so. Thus, if you’re looking for someone to thank for implementing the right measures the next time ransomware strikes, business continuity professionals are likely to be the right choice for your business.

Gianluca Riglietti CBCI is currently a Research Assistant at the Business Continuity Institute, where he provides support in managing publications and global thought leadership initiatives. He graduated from King’s College London in 2015 with a Master’s in Geopolitics, Territory and Security.

The Business Continuity Institute

Despite the Wanna Decryptor ransomware attack affecting a reported 200,000 systems across over 150 countries, and despite the tales of disaster we are reading about in the media, the encouraging news from many organizations is that their business continuity process is preventing a disruption from turning into a crisis.

As a result of this latest attack, organizations across the world, including many NHS Trusts in the UK, have been invoking their business continuity procedures, ensuring that the priority activities are carried out, an appropriate level of service is provided to customers and any damage to reputation is limited.

“With a major incident now declared by NHS England, it is evident just how disruptive cyber attacks such as ransomware can be,” said David Thorp, Executive Director of the Business Continuity Institute. “Organizations must have mechanisms in place so they are prepared to deal with the consequences of a cyber security incident, or in fact any other type of incident, and can continue as near ‘normal’ operation as possible, while maintaining the confidence of their stakeholders.”

The modern business environment is heavily reliant on IT systems, and although these systems provide many benefits, they also have their pitfalls, which stem from this reliance. Research conducted by the Business Continuity Institute underlines the inevitability of an attack, with its Cyber Resilience Report showing that two-thirds of organizations had experienced an incident during the previous year, and 10% had experienced at least ten.

The dramatic effects of an attack such as last Friday’s should not be underestimated, yet organizations, such as the NHS, have managed to keep operating under attack. All Trusts are required to have in place an effective business continuity plan, and it is testament to the effectiveness of this planning that disruption has not been more severe.

All businesses can develop similar levels of resilience. It is business continuity that makes an immediate difference during any kind of emergency, crisis or disruption. It is what makes an organization resilient, ready to respond and carry on, even amid difficult circumstances. Yet business continuity cannot be improvised. It requires specialised and trained staff as well as the support of everyone within an organization – from executive management to junior staff.

David Thorp added: “The Business Continuity Institute has a range of free resources, via our website, that can be accessed by businesses and other organizations that need to avoid damaging disruptions to their activities. If prevention fails it is essential that smooth operations are maintained.”

Founded in 1994 with the aim of promoting a more resilient world, the Business Continuity Institute (BCI) has established itself as the world’s leading Institute for business continuity and resilience. The BCI has become the membership and certifying organization of choice for business continuity and resilience professionals globally with over 8,000 members in more than 100 countries, working in an estimated 3,000 organizations in the private, public and third sectors.

The vast experience of the Institute’s broad membership and partner network is built into its world class education, continuing professional development and networking activities. Every year, more than 1,500 people choose BCI training, with options ranging from short awareness raising tools to a full academic qualification, available online and in a classroom. The Institute stands for excellence in the resilience profession and its globally recognised Certified grades provide assurance of technical and professional competency. The BCI offers a wide range of resources for professionals seeking to raise their organization’s level of resilience, and its extensive thought leadership and research programme helps drive the industry forward. With approximately 120 Partners worldwide, the BCI Partnership offers organizations the opportunity to work with the BCI in promoting best practice in business continuity and resilience.

The BCI welcomes everyone with an interest in building resilient organizations from newcomers, experienced professionals and organizations. Further information about the BCI is available at www.thebci.org.

The Internet of Things (IoT) is rapidly expanding. Our homes, cars and workplaces are filling with connected devices designed to cater to our personalized needs. They respond to our instructions, whether delivered through a mobile app or a spoken command, and they collect data about our activities in order to better anticipate our needs. All of this data collection creates a digital trail of consumers’ lives, which becomes richer and more detailed as multiple sources of data are combined. Big data analytics offer seemingly endless opportunities to use and commercialize this data in new ways.

Yet unanticipated uses and disclosures of user data may compromise consumer privacy and even undermine consumer trust. As a result, companies will need to pay increasing attention to privacy compliance in the IoT space as courts and regulators focus on issues such as notice, choice and security.

A recent FTC settlement with the smart TV manufacturer Vizio, Inc. highlights several key privacy compliance challenges facing companies in the IoT space. In the settlement, which included a hefty payment of $1.5 million, the FTC reiterated its position that collecting and using information in ways that surprise consumers — such as Vizio’s collection and sharing of consumers’ television viewing activity via its connected televisions — requires “just-in-time” notice and choice. In addition, the FTC expanded its view of what constitutes sensitive personal information to include consumers’ television viewing activity, an indication that regulators are willing to look beyond traditional concepts of personal information as they evaluate new types of data collected by connected devices.



Monday, 15 May 2017 14:26

Compliance in a Connected World

If you’re a marketer struggling to decipher the complicated marketing technology landscape of more than 5,000 vendors – and show me a marketer who isn’t – then I have some good news for you. It won’t be as easy as following the yellow brick road, but you can begin to make sense of today’s seemingly infinite array of enterprise marketing technology (EMT) offerings.

Two of my research areas at Forrester are Cross-Channel Campaign Management (CCCM) and Real-Time Interaction Management (RTIM). I field myriad inquiries on both, as they are critical, confusing, and conflated in terms of technology and vendor overlap. While CCCM primarily focuses on automating marketing-driven campaign strategies for outbound channels, and RTIM primarily focuses on next-best-action strategies for customer-initiated interactions via inbound channels, both rely heavily on systems of insight (customer data and analytics) and systems of engagement (automated content and interactions). And both cover multiple inbound, outbound, digital, and offline channels.

CCCM is evolving as marketers strive to align highly personalized marketing campaigns with customer-initiated interactions to drive deeper levels of engagement throughout the customer life cycle. I addressed this evolution in The Forrester Wave™: Cross-Channel Campaign Management, Q2 2016, which featured 15 leading vendors. Since the CCCM space is much broader, earlier this year I also published the Vendor Landscape: Cross-Channel Campaign Management, and it adds a further 32 vendors to the mix, categorizing them as enterprise, small, or regional players, and reviewing capabilities such as vertical expertise or content management.



Did you watch the Snowden video? Life (and our work) would be more predictable if everything existed in the Ordered Domain – but in the real world it doesn’t.

Our organisations, especially when viewed in the contemporary risk/threat/vulnerability environment, are complex adaptive systems. When we promote simple (which often become simplistic) solutions, we are essentially doomed to fail.

I guess that brings me to the most recent contribution to the debate. Charlie Maclean Bristol’s “Revamping the business continuity profession” published in April 2017.

The Oxford Dictionary tells us that “revamp” (with an object attached) is a verb and it means

“Give new and improved form, structure or appearance to”.

Let’s see which of these elements are applicable here.

The starting premise is that the discipline has lost its “mojo” in recent years. If you are not familiar with the term beyond Austin Powers losing his, it would imply that BC has either lost its voodoo charm bag, lost its libido or run out of morphine.



The components of the global cyberattack that seized hundreds of thousands of computer systems last week may be more complex than originally believed, a Trump administration official said Sunday, and experts warned that the effects of the malicious software could linger for some time.

As a new workweek started Monday in Asia, there were concerns the malicious software could spread further and in different forms, with new types of ransomware afflicting computers around the globe.

There were initial reports of new cases found over the weekend in Japan, South Korea and Taiwan.

President Trump has ordered his homeland security adviser, Thomas P. Bossert, who has a background in cyberissues, to coordinate the government’s response to the spread of the malware and help organize the search for who was responsible, an administration official said Sunday.



The Business Continuity Institute

NHS services across England have been hit by an IT failure caused by a significant cyber attack, with Trusts and hospitals in London, Blackburn, Nottingham, Cumbria and Hertfordshire all affected. Some GP surgeries have shut down their phone and IT systems while Accident and Emergency Departments have told people not to attend unless it is a real emergency.

NHS Digital said in a statement that a number of NHS organizations have been affected by a ransomware attack, believed to be the malware variant Wanna Decryptor, but it was not specifically targeted at the NHS and is affecting organizations from across a range of sectors.

At this stage there is no evidence that patient data has been accessed. NHS Digital say they are working closely with the National Cyber Security Centre, the Department of Health and NHS England to support affected organizations and ensure patient safety is protected. The focus is on supporting organizations to manage the incident swiftly and decisively.

Ransomware attacks are becoming more and more commonplace with public sector organizations arguably receiving an unfair proportion of the attacks due to a perceived, or perhaps even an actual, weakness in their cyber defences. Threats to our organizations in the cyber world can be just as disruptive as any physical event. With healthcare providers across the country having to cancel services, it is clear that this is an alarming situation for the NHS.

“It doesn’t matter where the threat comes from, organizations must have plans in place to deal with the consequence of disruptive events” said David Thorp, Executive Director of the Business Continuity Institute. “By putting plans in place to deal with such events, it means that organizations are better prepared to manage through them, lessen the potential impact, and still provide an appropriate level of service to their customers.”

So how do organizations prepare for a possible ransomware attack? First and foremost, they must make sure that their data is backed up. If data is backed up and the organization experiences a ransomware attack, then it can isolate the ransomware, clean the network, and restore the data from the backup. It’s not necessarily an easy process, but it means the organization doesn’t lose all its data and doesn’t pay a ransom.
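As a minimal sketch of the back-up-and-restore idea described above (all paths, function names and the manifest format are illustrative, not any particular product's API), a scheduled copy with a simple integrity check might look like this; a checksum mismatch on verification can also be an early hint that files were tampered with or encrypted:

```python
import hashlib
import shutil
import time
from pathlib import Path


def backup(src: Path, dest_root: Path) -> Path:
    """Copy src into a timestamped folder and record per-file checksums."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_root / f"backup-{stamp}"
    shutil.copytree(src, dest)
    manifest = {}
    for f in dest.rglob("*"):
        if f.is_file():
            manifest[str(f.relative_to(dest))] = hashlib.sha256(f.read_bytes()).hexdigest()
    # Store "hash  relative-path" lines next to the backup folder.
    (dest_root / f"backup-{stamp}.manifest").write_text(
        "\n".join(f"{h}  {p}" for p, h in sorted(manifest.items()))
    )
    return dest


def verify(dest: Path, manifest_file: Path) -> bool:
    """Recompute checksums; a mismatch may indicate tampering or encryption."""
    for line in manifest_file.read_text().splitlines():
        h, p = line.split("  ", 1)
        f = dest / p
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != h:
            return False
    return True
```

In practice backups would also be kept offline or otherwise out of reach of the same credentials the ransomware runs under, since a backup the malware can encrypt is no backup at all.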

Make sure the operating system and installed software are up to date with the latest security patches, and that anti-virus and anti-malware tools are conducting regular scans of the network so they can pick up anything malicious before damage can be done. Configure access controls to the file directory so users can only access the files they need. The more restricted the flow of data is across the network, the better chance there is of stemming the spread of a ransomware attack.
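The access-control advice above can be sketched on a POSIX system with ordinary permission bits (the function name and paths are hypothetical; real deployments would use ACLs or directory-service group policy):

```python
import stat
from pathlib import Path


def lock_down(directory: Path) -> None:
    """Strip group/other write access so only the owner can modify files.

    A crude approximation of least privilege: code running under any
    non-owner account cannot overwrite or encrypt these files.
    """
    for path in [directory, *directory.rglob("*")]:
        mode = path.stat().st_mode
        # Clear the write bits for group and others, keep everything else.
        path.chmod(mode & ~(stat.S_IWGRP | stat.S_IWOTH))
```

The same principle, applied to network shares rather than local folders, is what limits how far a ransomware infection can spread from a single compromised account.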

They do say that prevention is better than cure, so one way to reduce the impact of ransomware is to stop it happening in the first place. The vast majority of the time, the user has to do something to install the software – click on a link, open an attachment – so if the user doesn’t do that, then the software can’t install. It may not be quite as simple as that, but it is important to develop a culture whereby users think twice about their actions.

With Business Continuity Awareness Week taking place next week, an event themed around cyber security and the need for organizations to make sure they are prepared for disruptive events in the cyber world, the Business Continuity Institute is calling on all organizations to put plans in place to deal with such events so that disruptions don’t turn into disasters.

U.S. government agencies would need to increase the annual salaries of information security personnel by approximately $7,000 to equal the annual salaries of their private sector counterparts, a recent survey of 2,620 U.S. Department of Defense, federal civilian and federal contractor employees found.

The survey [PDF], sponsored by (ISC)2, Booz Allen Hamilton and Alta Associates, also found that 87 percent of respondents said hiring and retaining qualified information security professionals is key to securing an organization's infrastructure.

"It's crystal clear that the government must enhance its benefits offering to attract future hires and retain existing personnel given its fierce competition with the private sector for skilled workers and the unprecedented demand; unfortunately, the layers of complexity involved in fulfilling that goal are significant," (ISC)2 managing director Dan Waddell said in a statement.



With the Atlantic hurricane season’s official start on June 1, the time to check your buildings and existing contingency plans—or start a new one—is now, during hurricane preparedness week.

For 2017, Colorado State University’s hurricane research team predicts slightly below-average activity of hurricanes making landfall, with a forecast of 11 named storms, four hurricanes, and two major hurricanes.

The 2016 season is seen as a wakeup call, as 15 named storms and seven hurricanes formed in the Atlantic Basin—the largest number since 2012. Among the hurricanes was Matthew, a Category 4, which devastated Haiti, leaving 546 dead and hundreds of thousands in need of assistance. After being downgraded to a Category 2, Matthew pummeled southeast coastal regions of the U.S., with 43 deaths reported and widespread flooding in several states.



Friday, 12 May 2017 16:44

Make Your Hurricane Preparations Now

Small businesses are increasingly vulnerable to cyberattacks. A new website launched by the Federal Trade Commission (FTC) is aimed at helping small business owners be better prepared.

The site – ftc.gov/SmallBusiness – is a one-stop shop where small business owners can find information to protect themselves from scammers and hackers, as well as resources they can use if they are hit with a cyberattack.

Online FTC resources include a new Small Business Computer Security Basics guide with information to help companies protect their files and devices, train employees to think twice before sharing the business’s account information, and keep their wireless network protected, as well as how to respond to a data breach.



I recently heard a segment on WBUR (a public radio station in Boston) on the emergence of microgrids, and I was amazed at how closely the concept of microgrids aligned with the concept of microperimeters within our Zero Trust model of information security. Zero Trust is a conceptual and architectural model for how security teams should redesign networks into secure microperimeters, increase data security through obfuscation techniques, limit the risks associated with excessive user privileges, and dramatically improve security detection and response through analytics and automation. Zero Trust demands that security professionals move away from legacy, perimeter-centric models of information security, which are useless for today's digital businesses that are no longer bounded by the four walls of their corporation, to a model that is both data- and identity-centric and extends security across the entire business ecosystem.



The Business Continuity Institute

Gianna Detoni FBCI, from Panta Ray Consulting in Italy, being presented with her Industry Personality of the Year award by James McAlister FBCI, Chairman of the Business Continuity Institute

At an Awards Ceremony at the Principal Hotel in Edinburgh, Scotland last night, the Business Continuity Institute presented its annual European Awards to recognise the individuals and organizations who have excelled in the field of business continuity and resilience throughout the year.

The European Awards are one of seven regional awards hosted by the BCI each year, culminating in the annual Global Awards held in November during the Institute’s annual conference in London, England.

Business continuity is an established industry across the continent, so the standard of entries to the BCI European Awards is always incredibly high, and this year was no different, giving the judges some tough decisions to make. All those who were on the shortlist can take great pride in their achievement, however there can only be one winner in each category, and those celebrating on the night were:

Continuity and Resilience Consultant
Petra Morrison MBCI, Daisy Group

Continuity and Resilience Professional Private Sector
Rob van den Eijnden AMBCI, Philips

Continuity and Resilience Professional Public Sector
Russ Parramore MBCI, South Yorkshire Fire and Rescue

Continuity and Resilience Newcomer
Timothy Dalby-Welsh AMBCI, Needhams 1834

Continuity and Resilience Team
Chief Fire Officers Association

Continuity and Resilience Provider (Service/Product)
ClearView Continuity

Continuity and Resilience Innovation

Most Effective Recovery
BPER Banca

Industry Personality
Gianna Detoni FBCI, Panta Ray

James McAlister FBCI, Chairman of the Business Continuity Institute and host of the Awards Ceremony, commented: "Once again I have been impressed with the high standard of entry we had for the BCI European Awards. Each and every one of the nominees has done an incredible job in helping to build resilience in a world full of disruptions. I would like to offer my congratulations to all the winners who are a credit to the industry, and I am delighted that the Business Continuity Institute is able to honour their hard work and dedication through these awards."

Keith Tilley, EVP and Vice-chair of Sungard Availability Services, said: "Sungard Availability Services has a long history in supporting the advancement and development of the continuity, resilience and availability industry, whether across standards development, proactive involvement in industry fora, or rewarding attainment. To this end we’re delighted to be sponsors of this year's BCI European Awards, which are designed to recognise the outstanding contributions of business continuity, risk and resilience professionals and organizations."

The Business Continuity Institute

More than two-thirds (70%) of IT managers at small and medium sized enterprises say budget considerations have forced them to compromise on security features when purchasing endpoint security, according to a survey by VIPRE. Overall, price was the top factor in endpoint security purchases (chosen by 53% of respondents), followed by ease of use (47%), feature set (41%), support (34%), advanced detection technology (31%), cloud-based management (29%) and ransomware protection (21%).

"SME IT managers need to better recognize the security dangers facing their organizations," said Usman Choudhary, chief product officer at VIPRE. "Ransomware alone was responsible for $1 billion in cyber-extortion payments last year, according to the FBI, but only 21% of survey respondents considered ransomware as a factor when they purchased endpoint security. We understand that price and budgets are a factor but forgoing advanced protection features such as those available through VIPRE can put a company at risk."

As ransomware attacks and awareness of the threat increase, 53% of respondents would recommend negotiating a payment with the attackers. This represents a significant increase from a 2015 survey in which only 30% of IT security pros said they would negotiate. The current study also noted that 82% of companies suffering a cyber attack in the last year would negotiate following a ransomware attack.

With ransomware on the rise, perhaps it is no surprise that phishing attacks remain the most pervasive cyber security threat. About 45% of IT managers have had to remove malware from an executive's computer due to phishing, a figure that rises to 56% for larger companies (351-500 employees).

Meanwhile, survey respondents also cited visits to porn websites (26%), letting a family member use a company-owned device (22%), attaching an infected USB stick or phone (22%) and installing a malicious app (21%) as reasons they had to remove malware. Only 25% said they have never been asked to remove malware from an executive's computer.

(TNS) - The Sutter Butte Flood Control Agency will host a meeting next Wednesday to update residents on how years of levee work and hundreds of millions of dollars in improvements have fared these past few months with the high water levels in the Feather River.

"Our focus will be on the work we've done on emergency repairs and some of the future work that is going to be done on the levee to rehabilitate the unimproved levees due to recent high-water events," said Mike Inamine, general manager of SBFCA.

The meeting is an opportunity for community members to learn about SBFCA's Feather River West Levee Project — how it fared during the recent storm events and the Lake Oroville spillway incident, and what still needs to be completed.



All-flash storage is clearly on the rise. Gartner analyst Valdis Filks predicts that by 2020, 50 percent of data centers will use only all-flash arrays (AFAs) for primary data, up from less than 1 percent in the middle of 2016. Overall, Gartner expects flash to be the dominant form of enterprise storage within a couple of years. By 2020, the firm predicts all-flash array revenue will reach $9.67 billion.

Who dominates in this field? Just about every vendor says they are number one. Cutting through the hype, what do the analysts say about who leads the way in all-flash arrays (AFAs)? Analyses by both International Data Corp. (IDC) and Gartner agree on some points. For example, both list the top AFA vendors as Dell EMC, Pure Storage, IBM, NetApp and HPE, though not necessarily in that order.



Thursday, 11 May 2017 14:21

Buying Guide: All-Flash Storage

Service Level Agreements (SLAs) need to cover all aspects of a business and its subsidiaries, which means they are often broad and generic and can leave your data center unprotected. The SLA made with Original Equipment Manufacturers (OEMs) is meant to ensure timely repairs and service. What often happens with a typical SLA, however, is that providers wait until the last minute of a quoted time frame to repair your systems, causing your business costly downtime. This is not a breach of contract, though it can be frustrating for businesses that need to keep equipment in use full-time.

An enhanced support SLA can help avoid these pitfalls.  Enhanced SLAs can supplement your existing warranty, offer flexibility and cost savings, and extend the life of your equipment.

If your business has had problems in the past with an SLA, then it’s time to consider an enhanced support SLA.



Thursday, 11 May 2017 14:20

Data Centers Need Better SLAs

This is part 4 of a multi-part series on the Analytics Operating Model.

As we move forward in our blog series on the Analytics operating model, we pinpoint the essential processes for delivering analytics. Data may be a prerequisite for most of these processes, but a successful solution is born from both science and art. Unlocking business value requires a healthy dose of creativity to work with data in its native state – incomplete and confounding.

Pierre Teilhard de Chardin, the 20th-century Jesuit priest and philosopher, may have been one of the first to encapsulate our need for creativity: “Our duty, as men and women, is to proceed as if limits to our ability did not exist. We are collaborators in creation.”

An elusive, but extremely important question in the advanced analytics journey is how to push past the limits and consistently discover new insights.  In one case, valuable insight may be revealed from a clustering algorithm which diagnoses data flow anomalies and points to a potential corporate network breach. In another case, a supervised machine learning model may be required to drive down false positive fraud alerts for consumer credit.
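The first case above can be sketched in a few lines. This is a toy illustration, not the clustering algorithm the text refers to: it flags flow volumes that sit far from the mean using a simple standard-deviation test, and the traffic figures are made up for demonstration:

```python
# Toy anomaly detection on network flow volumes (MB/min).
# Readings far from the mean are flagged for review -- a stand-in
# for the clustering-based breach detection described above.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [r for r in readings if abs(r - mu) > threshold * sigma]

flows = [52, 48, 50, 49, 51, 47, 53, 50, 980]
print(flag_anomalies(flows))  # flags the 980 MB/min burst
```

A production approach would run a real clustering or density-based method over many features at once, but the underlying idea of separating "normal" from "far from normal" is the same.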



The data center of the future is a constantly evolving concept. If you go back to World War II, the ideal was to have a massive mainframe in a large room fed by punched cards. A few decades later, distributed computing promoted an Indiana Jones-like warehouse with endless racks of servers, each hosting one application. Virtualization upset that apple cart by enabling massive consolidation and greatly reducing the number of physical servers inside the data center.

Now it appears that we are entering a minimalist period: data center spaces remain, but they have been stripped down until little is left beyond a few desktops in an otherwise empty room. Like a magic trick by David Copperfield, the Lamborghini under the curtain has disappeared in a puff of smoke. But instead of reappearing at the back of the room, the compute hardware has been transported to the cloud. And just as at a magic show, IT operations managers are applauding loudly.

“We moved backup and disaster recovery (DR) to the cloud and now intend to move even more functions to the cloud,” said Erick Panger, director of information systems at TruePosition, a company that provides location intelligence solutions. “It looks like we are heading to a place where few real data centers will exist in most companies with everything being hosted in the cloud.”



A recent Bromium survey of 210 security professionals in the U.S. and U.K. found that 35 percent of respondents admitted having gone around, turned off, or bypassed their own corporate security settings.

Even more alarmingly, 10 percent of respondents admitted having paid a ransom or hid a breach without alerting their team.

"While we expect employees to find workarounds to corporate security, we don't expect it from the very people overseeing the operation," Bromium co-founder and CTO Simon Crosby said in a statement. "Security professionals go to great lengths to protect their companies, but to learn that their decisions don't protect the business is frankly rather shocking."

"To find from their own admission that security pros have actually paid ransoms or hidden breaches speaks to the human factor in cyber security," Crosby added.



Assurance functions are on the rise. With continued proliferation in global regulations and increased public scrutiny of corporate behavior, companies have made significant investments in assurance programs (e.g., compliance, information security, quality) and control systems. These investments are made to identify and manage the operational, compliance and reputational risks that affect an enterprise’s financial results and brand value.

Unfortunately, despite these investments, legal and other assurance executives like compliance officers and information technology executives feel no more capable of managing risks today than they did 10 years ago. This can largely be traced to the fact that the process of managing risk is complex, and there are often assurance mandates and requirements that overlap between teams. This overlap leads to boards that lack visibility into corporate risks, business leaders who are more risk averse and employees who struggle to get work done while navigating compliance requirements.

So what’s the answer? For most companies, it’s coordinated assurance.



The overriding theme of every disruption story I’ve ever heard is that firms thought they had more time than they did. So I’ve been pondering the why. We can see disruption happening all around us, but why is it so difficult to get out in front of it?

Then I slogged my way through Ray Kurzweil’s Law of Accelerating Returns and it hit me. Digital disruption is about the clash between exponential change and our brains’ insistence that change be linear. Here is what I mean:



Wednesday, 10 May 2017 13:30

Why You Are Getting Disrupted

If it ain’t broke, don’t fix it. That’s been the mantra for the data center throughout much of the IT era, but at what point does the enterprise have to consider the very real possibility that without significant upgrades, the data center of today will no longer provide the support needed for modern applications and workflows?

At the moment, much of the industry is engaged in digital transformation to a new services-driven economy, and it is becoming clear that yesterday’s data infrastructure is woefully inadequate to the task. So without question, it must be modernized, and quickly. The question is, how? Is there value in revamping the local data center for the digital age, or should the enterprise go cloud-native?

No matter how you do it, says VMware’s Muneyb Minhazuddin, the overriding goal should be to abstract infrastructure away from hardware so applications can achieve the flexibility they need to produce real value. The enterprise should start by mapping out which apps require on-premises infrastructure and which can go to the public cloud. Once you have an idea of where you want to be at each point in the transitional timeline, you can set about making the necessary changes in hardware and software. And all the while, you should see steadily improving agility and a greater capacity to innovate as software-defined infrastructure takes hold.



We recently asked Cutter Senior Consultant San Murugesan a question: If you consider the transformation of business to be phenomenal thus far, what do you expect the future of business will be? He answered our questions in his opening statement of a Cutter Business Technology Journal issue focused on the business opportunities in the new digital age:

“Well, it’s definitely not going to be ‘business as usual.’ The business landscape is poised for an unprecedented wave of further innovations and changes. How these will emerge, who will be the leading players in different sectors, and how the changes will affect us — average people in both advanced and developing countries, young and old — are still unknown. Nevertheless, we can make educated guesses, which may eventually become reality.”



Over the past few years, I’ve had the wonderful opportunity to travel the world and visit factories, distribution centers, ports, warehouses, and several offices for the company where I work. Apart from being a great way to see the world, it has also been an opportunity to learn from the ways different cultures see and manage risk.

Coming from Latin America, it was clear to me that the concept of risk management was something not highly promoted or recognized in the region. Companies that operated locally took the approach of using intermediaries to transfer their risks to insurance companies. Occasionally I would find buyers focused on managing their own risks efficiently. But that was more than a decade ago. During my most recent trips to South America, I had the opportunity to see the implementation of a regional affinity program—a collaboration between a well-known broker and our company’s financial operations. In this case, those involved were highly educated in insurance concepts and their understanding of risk acceptance was completely in line with more developed markets.

Another interesting aspect of dealing with this program was the strong relationship between the broker and our office. It was a very cordial and open communication that transcended the usually formal interaction between these parties—and included text messages flying back and forth to get the deal done. In a way, the warm personality of South Americans permeated the business environment. So when it comes to this colorful part of the world, business is, in fact, personal.



Forrester has just published our updated forecast for the US tech market for 2017-2018 (see “US Tech Market Outlook For 2017 And 2018: Mostly Sunny, With Clouds And Chance Of Rain”). We are forecasting growth of 4.8% in 2017 and 5.2% in 2018 for US business and government spending on tech goods, services, and staff. This forecast assumes moderate US economic growth (2% to 2.5% real GDP growth, 4% to 4.5% nominal GDP growth). Considering this economic outlook, our updated 2017 forecast is slightly less positive than our December forecast (4.8% vs. 5.1%) for US budget growth in 2017, with our new 2018 forecast pointing to a modest improvement next year.

Three main themes define our updated forecast:



More than 80 percent of Americans are more concerned about their online privacy and security today than they were a year ago, a recent AnchorFree survey [PDF] of more than 2,000 Americans found.

Following the recent passage of a bill allowing ISPs to collect users' personal data without their permission, the survey found that over 95 percent of respondents are concerned about companies collecting and selling their personal information without their consent, and more than 50 percent are looking for new ways to safeguard their personal data.

The survey also found that while 70 percent of respondents are doing more today to protect their online privacy than they were a year ago, just one in four believe they're ultimately responsible for ensuring safe and secure Internet access.



The Business Continuity Institute


Despite rising awareness of the threats posed by users with privileged access permissions, most organizations still allow a myriad of internal and external parties to access their most valuable systems and data. Many are placing trust in both employees and third parties without a proven means of managing, controlling, and monitoring the access that these individuals, teams and organizations have to critical systems and networks.

Bomgar's 2017 Secure Access Threat Report revealed that 90% of security professionals trust employees with privileged access most of the time, but only 41% trust these insiders completely. Despite placing a lot of trust in employees by granting them privileged access, security professionals are paradoxically aware of the numerous risks that these individuals pose to the business. While most were not primarily worried about breaches of malicious intent, they were concerned that a breach was possible because employees might unintentionally mishandle sensitive data, or because employees’ administrative access or privileged credentials could easily be phished by cyber criminals. Yet businesses are still falling behind, with only 37% of respondents having complete visibility into which employees have privileged access, and 33% believing former employees could still have corporate network access.

Generally, employees want to be productive and responsible at work, suggesting that most are not malicious but rather skirt security best practices to speed things up. This drives the need for access solutions that prioritize productivity and usability without sacrificing security, and that integrate seamlessly into the applications and processes employees already use.

“It only takes one employee to leave an organization vulnerable,” said Matt Dircks, Bomgar CEO. “With the continuation of high-profile data breaches, many of which were caused by compromised privileged access and credentials, it’s crucial that organizations control, manage, and monitor privileged access to their networks to mitigate that risk. The findings of this report tell us that many companies can’t adequately manage the risk related to privileged access. Insider breaches, whether malicious or unintentional, have the potential to go undetected for weeks, months, or even years – causing devastating damage to a company.”

The report also uncovered that data breaches through third-party access are widespread. External suppliers continue to be an integral part of how most organizations do business. On average, 181 vendors are granted access to a company’s network in any given week, more than double the number from 2016. In fact, 81% of companies have seen an increase in third-party vendors in the last two years, compared to 75% the previous year.

With so many third parties granted access to an organization’s systems, perhaps it’s no surprise that more than two-thirds (67%) have already experienced a data breach that was ‘definitely’ (35%) or ‘possibly’ (34%) linked to a third-party vendor. While 66% of security professionals admit that they trust third-party vendors too much, action has not followed this recognition. Processes to control and manage privileged access for vendors remain lax: only 34% of respondents are totally confident that they can track vendor log-ins, and not many more (37%) are confident that they can track the number of vendors accessing their internal systems.

“As with insiders, third-party privileged access presents a multitude of risks to network security. Security professionals must balance the business needs of those accessing their systems – whether insiders or third-parties – with security,” added Dircks. “As the vendor ecosystem grows, the function of managing privileged access for vendors will need to be better managed through technology and processes that provide visibility into who is accessing company networks, and when, without slowing down business processes.”

BATON ROUGE, La. — Kim Aucoin moved to Baton Rouge from Charlotte, North Carolina, in March 2016. She was raised in Lafayette and was happy to once again live in her home state of Louisiana. Little did she know that just five months later the area would be devastated by historical flooding.

“My landlord came to the house and said ‘Get out, we’re going to flood,’” said Aucoin. The home had never flooded before, even during a big flood event in 1989, but she said this time her landlord didn’t want to take any chances. Aucoin and her husband, Randy, evacuated to her boss’s home in Prairieville, but before leaving, they placed sandbags around the property. The sandbags didn’t help; the house took in 16 inches of water. 

Aucoin had hazard insurance for her rental home, but it didn’t cover damage from rising water, and the couple had not purchased flood insurance. “I work for an insurance company so why I didn’t get it was just stupidity,” said Aucoin. She wasn’t alone: 39 percent of the residents who flooded in August were not living in a flood-prone area, and some didn’t have flood insurance coverage.

While their landlord repaired and renovated the damaged home, the Aucoins lived in a small trailer they borrowed for a few weeks. Then they moved into a hotel and were pleased to find out FEMA’s Individuals and Households Program (IHP) would reimburse them for hotel expenses.  “We received FEMA money within five business days,” Aucoin said. The money was electronically deposited into their bank account which made the process fast and convenient.

Even though the Aucoins’ contents weren’t a total loss, they still qualified for FEMA assistance and filed a claim. “It helped us start to replace things,” said Aucoin. Another big help was receiving a Louisiana Electronic Benefit Transfer (EBT) card. “We lost all of our food in the flood, and neither the trailer nor the hotel had a kitchen, so it was very helpful.” The $300 EBT card was reloaded once, for a total of $600 to assist with grocery expenses.

The Aucoins are among the more fortunate flood survivors in the sense that they were able to move back into their rental house just two months after the August floods. And this time they have flood coverage through the National Flood Insurance Program (NFIP). A smart move, since hurricane season begins June 1 and there is a 30-day waiting period between purchasing a policy and the date it goes into effect. Despite the unsettling start, the couple plans to stay in Baton Rouge. Aucoin said, “It’s been a rough few months, but I’m glad to be here.”

NFIP Facts:

  • In Louisiana, flood-related events occur every year.
  • The National Flood Insurance Program (NFIP) provides contents as well as structure coverage for home and business owners.
  • The average annual cost of flood insurance is about $700. Depending on the policy, insurance holders may receive up to $250,000 for home damage.
  • NFIP policies offer coverage for flood damage that federal disaster assistance and most homeowners insurance policies do not cover.
  • NFIP payments are not dependent on state or federal disaster declarations.
  • New flood insurance policies go into effect 30 days after purchase.
  • More than 39 percent of structures flooded in August were located in low- and moderate- risk areas.
  • Properties outside of the Special Flood Hazard Area (SFHA) account for more than 20 percent of the country’s NFIP claims and receive a third of flood-related federal disaster assistance.

Go to www.floodsmart.gov to learn more about any property’s flood risk, estimate an NFIP premium or locate an insurance agent who sells flood insurance.

Visit Floods | Ready.gov for flood information and safety tips.

(TNS) - Higher education and public safety officials across the state say a bill aimed at making campuses safer by allowing people to carry concealed handguns on school grounds would have the opposite effect.

“Based on my 40 years in law enforcement, I know that when there are more guns allowed, there is more risk and less safety,” said Roland LaCroix, chief of campus police at the University of Maine in Orono.

The bill, LD 1370, would require Maine’s universities, community colleges and Maine Maritime Academy to allow people to carry concealed handguns on campus. The Legislature’s Committee on Education and Cultural Affairs held a public hearing on the bill this week, where it met widespread resistance from higher education and public safety officials.



In our Business Continuity and Disaster Recovery planning, we spend much of our time assessing, documenting and developing strategies for when an event may occur. This is all to prepare for or prevent an outage. What is the point of all these preparations? When disaster strikes, you want to get back to normal as quickly as possible. It’s important to go through these three phases of disaster recovery.



Tuesday, 09 May 2017 15:11

The Three Phases of Disaster Recovery

Key Message:

  • Section 404 hazard mitigation and Section 406 hazard mitigation funding are distinct programs with key differences in their scope, purpose and funding.

Section 404 – Hazard Mitigation Grant Program

  • The 404 funding is used to provide protection to undamaged parts of a facility or to prevent or reduce damages caused by future disasters.
  • The entire state - not just presidentially declared counties - may qualify for 404 mitigation projects.
  • The 404 grant is managed by the State under funding provided for in the Stafford Act. Section 404 mitigation measures are funded under the Hazard Mitigation Grant Program (HMGP).
  • The State receives a percentage of the Total Federal share of the declared disaster damage amount (20%), which it uses to fund projects anywhere in the State, regardless of where the declared disaster occurred or the disaster type.
  • Applicants who have questions regarding the Section 404 mitigation program should contact the State Hazard Mitigation Officer, Tim Cook, 253-512-7072.
  • 404 grant funding may be used in conjunction with 406 mitigation funds to bring an entire facility to a higher level of disaster resistance, when only portions of the facility were damaged by the current disaster event.
  • All subapplicants for HMGP must have a FEMA-approved local or Tribal Mitigation Plan at the time of obligation of grant funds for mitigation projects.
    • The Regional Administrator may grant an exception to the local or Tribal Mitigation Plan requirement in extraordinary circumstances when justification is provided. If this exception is granted, a local or Tribal Mitigation Plan must be approved by FEMA within 12 months of the award of the project subaward to that community.
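The 20% allocation described above works out as a straightforward calculation. The disaster figure below is hypothetical, purely for illustration:

```python
# Hypothetical HMGP allocation: the State receives a fixed share (20%,
# per the Section 404 description above) of the total federal share of
# the declared disaster damage amount.
def hmgp_allocation(total_federal_share, rate=0.20):
    """Return the state's HMGP funding for a given federal damage share."""
    return total_federal_share * rate

# Example: a disaster with a $150 million federal share would yield
# $30 million for mitigation projects anywhere in the State.
print(f"${hmgp_allocation(150_000_000):,.0f}")
```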

Section 406 – Public Assistance Program

  • The 406 grant is managed by the State under funding provided for in the Stafford Act. Section 406 mitigation measures are funded under the Public Assistance, or Infrastructure, program (PA).
  • The 406 funding provides discretionary authority to fund mitigation measures in conjunction with the repair of the disaster-damaged facilities, so is limited to declared counties and eligible damaged facilities.
  • Section 406 mitigation applies to the parts of the facility that were damaged by the disaster, and the mitigation measures must directly reduce the potential for future, similar disaster damage to the eligible facility.
  • Applicants who have questions regarding the Section 406 mitigation program should contact the State Public Assistance Officer assigned to their projects.

Last week, I wrote a bit about the dangers of passwords and the relationship with the Google Docs phishing scam that recently broke. Today, I’m going back to the Google Docs issue, but to look at it from a different angle: how scammers continue to use social engineering so successfully.

An eSecurity Planet article touched on this:

Fidelis Cybersecurity threat research manager John Bambenek said by email that the attack is a stark reminder that criminals and nation states are targeting the one thing technology can't fix -- the user. "If you can trick the user into compromising themselves, you have no need for a zero-day," he said. "Security awareness and vigilance of end users are the key to the security of any system."

This echoes what Nathan Wenzler, chief security strategist at AsTech, told me in an email message: hackers are using attacks such as ransomware and honed spearphishing campaigns to go after the weakest link, people. He added:



(TNS) - Think outside the levee.

As concerns about the state's aging flood-control infrastructure grow, experts are seeking ways to address the San Joaquin River's big-time risks in less traditional ways.

We'll still need to strengthen our levees and dams in the future, of course. But a recently released draft plan contains some new and creative ideas that could help save hundreds of lives and prevent billions of dollars in damages.

There may be other benefits, too: Improving conditions for endangered fish, reducing pollution, or providing new recreational opportunities.



Hurricane season has yet to begin and already record-setting flooding in parts of the central United States will likely become the country’s sixth billion-dollar disaster event of 2017.

While Missouri and Arkansas have been hit the hardest, recent flooding in the central U.S. has been widespread and it will likely take weeks before the full extent of flood damages is known.

So far, 2017 has seen five billion-dollar disaster events, including one flooding event, one freeze event and three severe storm events, according to NOAA.



The Business Continuity Institute

Research commissioned by Crises Control from the Business Continuity Institute for their annual cyber resilience report 2016 confirms much of what we already suspected about the changing nature of the cyber threat and the way that cyber criminals have found new ways past corporate perimeter security.

66% of respondents to the survey reported that their companies had been affected by at least one cyber security incident over the last 12 months. The costs of these incidents varied greatly, with 73% reporting total costs over the year of less than €50,000, but 6% reporting annual costs of more than €500,000.

The increased difficulty of breaching perimeter security and the increased human resources available to cyber criminals have combined to produce a new point of attack. This is focused on the weakest link in the corporate security chain, which is now human beings rather than technology.

The term “social engineering” describes this attack vector, which relies heavily on human interaction and often involves tricking people into breaking normal security procedures. The BCI research shows clearly that phishing (obtaining sensitive data through false representation) and social engineering are now the single biggest cause of cyber disruption, with over 60% of companies reporting being hit by such an incident over the past 12 months. A further 37% were hit by spear phishing (phishing through identity fraud).

The research has also confirmed that to effectively counter this threat companies now need behavioural threat detection, provided by a cyber security network monitoring solution. These plugin devices monitor your network for signs of suspicious insider activity and failed attempts to hack into the system. They can also provide invaluable intelligence to be acted upon proactively to nip a successful hack or insider threat in the bud.
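As a rough illustration of what such monitoring does, the sketch below counts failed log-ins per source address in a hypothetical log format and flags sources that exceed a threshold. Real network monitoring solutions are far richer, but the baseline-plus-alert pattern is the same:

```python
# Toy behavioural monitor: count failed log-ins per source IP and flag
# sources exceeding a threshold. The log line format is an assumption
# made up for this example.
from collections import Counter

def failed_login_sources(log_lines, threshold=3):
    """Return {ip: failure_count} for sources at or above `threshold` failures."""
    fails = Counter()
    for line in log_lines:
        if "LOGIN FAILED" in line:
            # assumed format: "<time> LOGIN FAILED from <ip>"
            fails[line.rsplit(" ", 1)[-1]] += 1
    return {ip: n for ip, n in fails.items() if n >= threshold}

logs = [
    "09:01 LOGIN FAILED from 10.0.0.7",
    "09:02 LOGIN OK from 10.0.0.8",
    "09:02 LOGIN FAILED from 10.0.0.7",
    "09:03 LOGIN FAILED from 10.0.0.7",
]
print(failed_login_sources(logs))  # flags 10.0.0.7
```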

Traditional anti-virus monitoring software is no longer enough. The BCI research shows that 72% of companies have this software in place, but only 26% of real cyber security incidents were actually discovered through this route. Much worse, 18% of incidents came to attention through an external source such as a customer, a supplier or the impact on a public website.

Network monitoring solutions are much more effective than anti-virus software at alerting companies to a cyber breach: 63% of companies have network monitoring software in place, and 42% of cyber incidents came to attention through the work of the IT department to whom such systems report.

The scale of the cyber threat can feel overwhelming at times. But educating your own employees about the nature of the threat and then putting in place the right solutions can go a long way towards mitigating the social engineering threat and significantly enhancing your corporate cyber resilience. Act now before it is too late.

Sonny Sehgal and Adam Blake, from Crises Control partners Transputec and ThreatSpike, will be talking about the social engineering threat in their webinar on cyber security and the insider threat during Business Continuity Awareness Week 2017 on Tuesday 16th May.

The Business Continuity Institute

Having an effective business continuity programme does not just mean making sure your own organization has a plan in place to deal with disruptions, it also means ensuring that your supply chain is resilient too. How would your organization cope if your supplier was no longer able to supply, or perhaps their supplier and so on? As the saying goes: you’re only as strong as your weakest link.

The 2016 Supply Chain Resilience Report, published by the Business Continuity Institute in collaboration with Zurich Insurance Group, showed that one in three organizations had experienced cumulative losses of over €1 million during the previous year as a result of supply chain disruptions. Furthermore, the report showed that 70% of organizations had experienced at least one supply chain disruption during this same time period, while 22% had experienced at least eleven.

Has your organization experienced a disruption to its supply chain? What were the causes and consequences of those disruptions? Help inform our next Supply Chain Resilience Report by taking a few minutes to complete the survey, and be in with a chance of winning a £100 Amazon gift card.

Tuesday, 09 May 2017 14:33

BCI: Managing the supply chain

How well do you understand your commercial partners’ compliance programs? Recent eye-popping settlements have reminded non-U.S. companies of the danger of failing to comply with U.S. sanctions and export control laws. But strengthening your own compliance program will not provide complete protection when your business partners are targeted by authorities. An unexpected enforcement action against a key supplier or financial institution can disrupt the flow of goods and services along global supply chains and threaten well-established trading networks.  Customers, lenders, manufacturers and retailers, among others, are taking a closer look at their counterparties and asking for stronger legal protections against follow-on sanctions and export control risks.

Extraterritorial Power

The U.S. government has an impressive arsenal of tools for enforcing laws against non-U.S. persons for conduct taking place outside the United States, especially in the areas of sanctions and export controls (collectively, “sanctions”).

The Office of Foreign Assets Control (OFAC) has long been known for its ability to sanction non-U.S. actors who threaten U.S. national security and policy. Individuals and entities who appear on the OFAC list of Specially Designated Nationals (SDNs) are effectively cut off from the U.S. financial system, not to mention the growing number of non-U.S. banks that "voluntarily" follow OFAC regulations to de-risk their operations.



Data security has traditionally been seen as a matter of locking down data in a physical location, such as a data center. But as data migrates across networks, borders, mobile devices, and into the cloud and Internet of Things (IoT), focusing solely on the physical location of data is no longer sufficient.

To prevent disclosure of sensitive corporate data to unauthorized people in this new corporate environment, data needs to be secured. Encryption and data masking are two primary methods of securing sensitive data, either at rest or in motion, in the enterprise, and both are important parts of endpoint security.
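To illustrate data masking, here is a minimal Python sketch (the function name and card-number format are assumptions for demonstration): the original value is irreversibly obscured except for the portion a user legitimately needs to see.

```python
def mask_pan(pan: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits of a card number."""
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - visible) + digits[-visible:]

# The masked value still supports tasks like customer verification,
# but the full number can no longer be recovered from it.
print(mask_pan("4111 1111 1111 1234"))  # ************1234
```

Unlike encryption, masking is one-way by design: there is no key that turns the masked value back into the original, which makes it well suited to test environments, logs, and screens.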

Encryption is the process of encoding data in such a way that only authorized parties can access it. Sensitive data in plaintext is transformed by an encryption algorithm into ciphertext, which can only be read if decrypted.
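The plaintext-to-ciphertext round trip can be sketched with a toy XOR cipher — illustration only, not real cryptography; production systems use vetted algorithms such as AES via an established library:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same
    # operation twice with the same key restores the original.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"Account: 12345"
key = b"secret"                              # assumed demo key
ciphertext = xor_cipher(plaintext, key)      # unreadable without the key
assert xor_cipher(ciphertext, key) == plaintext
```

The point of the sketch is the shape of the operation: without the key, the ciphertext is opaque; with it, the exact plaintext is recovered.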



Is it time to put the public vs. private/hybrid cloud debate behind us? Like Mac vs. PC or open vs. proprietary, it seems that the biggest arguments over technology have a shelf-life, and the time to put conflicts over cloud infrastructure behind us is nearing.

The reason is simple: In an age of virtual, abstract data environments, the enterprise is no longer limited to stark choices when it comes to resource configurations.

While it’s true that, as InfoWorld’s David Linthicum points out, public cloud providers are pushing the envelope on emerging technologies like artificial intelligence and serverless computing, the fact remains that local infrastructure still provides unique capabilities that cannot be matched by third-party infrastructure, no matter how advanced. This goes way beyond the security issue, which some say is better in the public cloud, to factors like latency, data residency, governance and single-vendor lock-in.



A massive phishing campaign impersonating a request to share Google Docs documents hit inboxes worldwide earlier this week.

Victims who clicked on links in the emails were asked to share access to their Gmail contact lists and Google Drive, the New York Times reports -- and those contact lists were then used to distribute the attack to victims' contacts.

In a statement, Google said, "We have taken action to protect users against an email impersonating Google Docs, and have disabled offending accounts. We've removed the fake pages, pushed updates through Safe Browsing, and our abuse team is working to prevent this kind of spoofing from happening again."


