
Industry Hot News


(TNS) — U.S. Geological Survey geophysicist Sarah Minson was in the thick of efforts to develop an earthquake warning system in California when a series of major temblors struck the sparsely populated community of Ridgecrest in the Mojave Desert this summer. The largest, a magnitude 7.1 quake on July 5, was the biggest to hit the state in decades.

We asked her about her work — and how this month’s big quakes are helping scientists refine California’s fledgling earthquake alert system.

...

https://www.govtech.com/public-safety/Recent-Quakes-Have-Helped-Scientists-Hone-ShakeAlert.html

(TNS) — Beaver County officials plan to improve emergency communication after a series of alerts about last weekend's chemical release in Rochester left residents scratching their heads.

About 27,000 residents received notice of a shelter advisory Friday evening after a fire at a former pool chemical site emitted pungent chlorine fumes in a five-mile radius of New York Avenue in Rochester. But the alert was vague, residents said, and because it was sent via Swift911, a subscription service, many residents didn't receive it.

The Beaver County board of commissioners met with emergency services officials Tuesday to discuss how the situation was handled. Commissioner Tony Amadio said that while the situation was overall handled well, there were concerns. Two of the county's top emergency services personnel were out of state, which caused some issues, he said.

...

https://www.govtech.com/em/safety/Beaver-County-Pa-Officials-to-Improve-Emergency-Communications.html

Most organizations do a good job when it comes to developing plans to protect their staff in the event of an emergency. However, there are several other key tasks that often go overlooked.

In today’s post, we’ll look at the six tasks that every organization should address in the plans it draws up so it is ready when an emergency strikes.

There is a right and wrong time for an organization to figure out how it’s going to respond to various types of emergencies. The wrong time is in the seconds after the fire alarm goes off, or trouble announces itself in some other way.

Responsible organizations plan ahead of time for emergencies, considering the different types of problems likely to occur and developing ways of dealing with them. They think about categories of events (rather than specific problems), and they produce their plans in simple checklist form, excluding policy statements (or consigning such statements to the back). This is so their plans consist of simple steps that can be readily understood, taken, and checked off in the heat of an emergency. They also stage frequent drills, so their staff are familiar with where the plans are and know their roles in carrying them out.

...

https://www.mha-it.com/2019/07/17/6-tasks-every-emergency-plan-should-address/

At Forrester, it is our goal to be ahead of the market trends so we can advise clients on what is to come and how they should prepare. Each year, we publish a series of predictions reports about what may be of primary concern for various roles over the course of the coming year. Rather than using our market insight and intuition to predict what may happen in technology and business, what if we were able to see the future? If we could go in a time machine, what would differentiate businesses’ security postures and practices 10, 50, or even 100 years into the future?

Unfortunately, Forrester has not built a time machine (yet!), but several reports that the security and risk team has published in the last month can help practitioners prepare their security programs for the far future:

...

https://go.forrester.com/blogs/the-security-snapshot-forrester-time-machine/

New Orleans averted disaster this month when tropical storm Barry delivered less rain in the Crescent City than forecasters originally feared. But Barry’s slog through Louisiana, Arkansas, Tennessee and Missouri is just the latest event in a year that has tested levees across the central U.S.

Many U.S. cities rely on levees for protection from floods. There are more than 100,000 miles of levees nationwide, in all 50 states and one of every five counties. Most of them seriously need repair: Levees received a D on the American Society of Civil Engineers’ 2018 national infrastructure report card.

Levees shield farms and towns from flooding, but they also create risk. When rivers rise, they can’t naturally spread out in the floodplain as they did in the pre-flood control era. Instead, they flow harder and faster and send more water downstream.

...

https://www.govtech.com/em/preparedness/As-Flood-Risks-Increase-Across-the-US-Its-Time-to-Recognize-the-Limits-of-Levees.html

Sandy Hook, Boston, Las Vegas, Parkland and Pittsburgh. Those locations now have a secondary meaning: mass casualty events. Each has its own community impact and recovery process.

Response plans are created during the calm and quiet of a work day.  A variety of exercises are conducted to test those plans and modify them accordingly to meet their operational goals and needs.

These plans use real world lessons to help frame and update response protocols.

Public Safety agencies involved can be police, fire, EMS & OEM.  Steve Crimando, internationally recognized crisis management and trauma consultant, refers to these exercises as “Stop the Killing and Stop the Dying”.  You see them on the news… SWAT or a rapid response unit responds to an active shooter/hostile event call.  They enter the building/area to locate and/or neutralize the threat actor(s).

...

https://www.preparedex.com/behavioral-assumptions-crisis-impact-mass-casualty-incident-response-mental-health/

(TNS) — Low-interest federal disaster loans are now available to certain private nonprofit organizations in Osage and Nowata counties following President Donald Trump's federal disaster declaration for Public Assistance as a result of severe storms, tornadoes, straight-line winds and flooding that occurred April 30 – May 1, acting U.S. Small Business Administration Administrator Christopher M. Pilkerton announced.

Private nonprofits that provide essential services of a governmental nature are eligible for assistance.

These low-interest federal disaster loans are available in Alfalfa, Atoka, Bryan, Coal, Craig, Kay, Lincoln, Love, Major, Noble, Nowata, Okmulgee, Osage, Ottawa, Pittsburg, Pushmataha, Stephens and Tillman counties.

...

https://www.govtech.com/em/disaster/Federal-Disaster-Loans-Offered-to-Certain-Private-Nonprofits.html

The CCPA, which goes into effect in six months, will cover data beginning in January 2019, so the time to prepare is now. Aparavi’s CTO Rod Christensen discusses the steps companies must take to ensure compliance as soon as possible.

The purpose of the California Consumer Privacy Act (CCPA) is mainly to rein in the use and sale of personal information by large companies for purposes such as advertising. This doesn’t mean the rest of us are off the hook for CCPA compliance, however. Let’s look briefly at some of the reasons the CCPA law may apply to you and what it covers.

...

https://www.corporatecomplianceinsights.com/compliance-preparing-for-ccpa/

Innovation isn’t just having a few bright ideas. It’s about creating value and helping organizations continuously adapt and evolve. ISO is developing a new series of International Standards on innovation management, the third of which has just been published.

Innovation is an increasingly important contributor to the success of an organization, enhancing its ability to adapt in a changing world. Novel and innovative ideas give rise to better ways of working, as well as new solutions for generating revenue and improving sustainability. Innovation is also closely linked to an organization's resilience, in that it helps it to understand and respond to challenging contexts, seize the opportunities these might bring, and leverage the creativity of both its own people and those it deals with.

Ultimately, big ideas and new inventions are often the result of a long series of little thoughts and changes, all captured and directed in the most effective way. One of the most efficient ways of doing just that is through implementing an innovation management system.

...

https://www.iso.org/news/ref2414.html

Last week, the United States Conference of Mayors adopted a resolution against paying ransoms. What’s interesting about this is it’s creating what is essentially a vertical front of communities against ransomware. It may well disincentivize attackers from targeting US towns and cities. I’m hopeful and encouraged by this action, but I worry that this resolution is a dismissal of culpability and should have been about investing in cybersecurity before a ransomware outbreak, instead of advertising that we’d rather jump on a sword than pay a ransom.

I’ve been writing about the need for ransomware victims to prioritize their self-interest and consider paying the ransom if they can establish that the actor will credibly provide decryption keys and that paying would make recovery discernibly less costly. One of the common responses I’ve received in this regard is that I’m encouraging the creation of a ransomware market, because the act of paying ransoms encourages more actors to get involved in this space — supply and demand.
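To make that cost comparison concrete, here is a minimal sketch of the trade-off, with entirely hypothetical figures and an invented should_consider_paying helper. It illustrates the arithmetic only, not the author's model or a recommendation, and it ignores the legal, ethical and insurance factors a real decision involves.

# Toy sketch of the ransom-vs-rebuild trade-off described above.
# All names and numbers are invented; it assumes that working decryption
# keys avoid the rebuild and its downtime entirely.

def should_consider_paying(ransom_usd: float,
                           rebuild_cost_usd: float,
                           downtime_days_if_rebuilding: float,
                           downtime_cost_per_day_usd: float,
                           probability_keys_work: float) -> bool:
    """Return True if paying looks discernibly cheaper than rebuilding."""
    cost_if_rebuilding = (rebuild_cost_usd +
                          downtime_days_if_rebuilding * downtime_cost_per_day_usd)
    # If the keys fail, the victim pays the ransom and still rebuilds.
    expected_cost_if_paying = (ransom_usd +
                               (1 - probability_keys_work) * cost_if_rebuilding)
    return expected_cost_if_paying < cost_if_rebuilding

# Example with invented numbers: a $200k ransom vs. a $1.5M rebuild plus
# 14 days of outage at $100k per day, with 80% confidence in the keys.
print(should_consider_paying(200_000, 1_500_000, 14, 100_000, 0.8))  # True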

...

https://go.forrester.com/blogs/the-rising-tide-of-ransomware-requires-a-commitment-best-practices/

As flexible working becomes the new normal, how can risk directors feel confident company data is secure?

The era of digital transformation is well underway. As new technologies – such as artificial intelligence and blockchain – increasingly become the arteries of industry, data has become the lifeblood for businesses. And yet many employees have very little knowledge about how to protect it.

According to IT consultancy ESG Cybersecurity, more than half of organisations report a “problematic shortage” of cybersecurity skills within their company. Globally, we’re currently experiencing a cybersecurity workforce gap of 2.9 million employees, according to research from IT security training organisation ISC².

Risk directors are racing against the clock to assess potential threats to sensitive company information. And, for some, the growing global trend for flexible working may seem to be one of them. According to research from cloud computing company Rackspace, letting staff and third parties access data remotely is seen as the greatest threat to cybersecurity by executives.

...

https://www.regus.com/work-us/flexible-working-cybersecurity/


In over twenty years in the business, I’ve seen it all in terms of how our clients treat us at MHA as consultants, partners, and people. Most clients are great, but a few have made our lives miserable and have never quite learned how to treat a BCM consultant.

In today’s post, we’ll look at what differentiates good business continuity management (BCM) clients from bad—and explain how it benefits your company to have a healthy BCM consultant relationship.

Some clients are a joy to work with, and some are a pain in the you-know-what. What makes a client one or the other? I’ll get into the details of how to treat your BCM consultant in a moment.

...

https://bcmmetrics.com/bcm-consultant-relationship/

The new California Consumer Privacy Act (CCPA) is shaping up to be the toughest privacy law in the U.S. Nymity’s Chief Global Privacy Strategist, Teresa Troester-Falk, discusses what organizations need to do to adapt to the changing U.S. privacy law landscape.

Would you find it surprising that almost half of privacy officers consider building a privacy program as their top priority? Perhaps one would expect that privacy programs would have been built in the run-up to the GDPR compliance deadline (May 25, 2018). In our view, this is an indication that companies may be treating compliance as a tactical “checklist” project and are now struggling with how to handle the multitude of privacy laws that just keep coming.

The Need for Timely Compliance

If reporting on the status of your data privacy compliance has not yet become a focus or priority for your board, it soon will be. Corporations and, in particular, corporate directors have a number of responsibilities and liabilities as part of their compliance and oversight obligations. Privacy is becoming an increasingly important topic at the board table and shareholders are also holding their boards accountable. Just last year, a shareholder suit was launched against a U.S. public company and some of its officers and directors for allegedly making false and misleading statements to investors about the impact of privacy regulations and the third-party business partners’ privacy policies on the company’s revenue and earnings. While we expect GDPR compliance to remain high on the radar of corporate boards, focus will expand as organizations turn their attention to the United States with the passing of state-level privacy legislation in California and Nevada, as well as numerous other states with legislation in flight.

...

https://www.corporatecomplianceinsights.com/gdpr-ccpa-challenges-privacy-compliance/

Urbanization is increasing, placing pressure on resources and infrastructure like never before. There’s no stemming the tide, so city leaders need to build resilience in order to cope. Work on a new International Standard for urban resilience, led by the United Nations, has just kicked off, aiming to help local governments build safer and more sustainable urban environments.

City living is where it’s at. The top 600 cities in the world house 20 % of the global population but produce 60 % of the world’s GDP, and the numbers are growing. It is estimated that, by 2050, 68 % of us will be living in cities, increasing the scale of impact when disasters strike. Which they will. In 2018, for example, more than 17 million people were displaced by sudden-onset disasters such as floods. With climate change making such disasters more frequent and less predictable, urban areas need to be prepared.

Work has now started on a new ISO standard for urban resilience, aimed at helping national and local governments build their capacity to face the new challenges arising from climate change and shifting demographics. It will define a framework for urban resilience, clarify the principles and concepts, and help users to identify, implement and monitor appropriate actions to make their cities more resilient.

...

https://www.iso.org/news/ref2412.html

Originally appeared on the DCIG blog.

 


As more organizations embrace a cloud-first model, everything in their IT infrastructure comes under scrutiny, including backup and recovery. A critical examination of this component of their infrastructure often prompts them to identify their primary objectives for recovery. In this area, they ultimately want simplified application recoveries that meet their recovery point and recovery time objectives. To deliver this improved recovery experience, organizations may now turn to a new generation of disaster-recovery-as-a-service (DRaaS) offerings.

A Laundry List of DRaaS’ Past Shortcomings

DRaaS may not be the first solution that comes to mind when organizations look to improve their recovery experience. They may not even believe DRaaS solutions can address their recovery challenges. Instead, DRaaS may imply that organizations must first:

  1. Figure out how to pay for it
  2. Accept there is no certainty of success
  3. Do an in-depth evaluation of their IT infrastructure and applications
  4. Re-create their environment at a DR site
  5. Perform time-consuming tests to prove DRaaS works
  6. Dedicate IT staff for days or weeks to gather information and perform DR tests

This perception of DRaaS may have held true at some level in the past. However, any organization that still adheres to this view should take a fresh look at how DRaaS providers now deliver their solutions.

The Evolution of DRaaS Providers

DRaaS providers have evolved in four principal ways to take the pain out of DRaaS and deliver the simplified recovery experiences that organizations seek.

1. They recognize recovery experiences are not all or nothing events.

In other words, DRaaS providers now make provisions in their solutions to do partial on-premises recoveries. In the past, organizations may have only called upon DRaaS providers when they needed a complete off-site DR of all applications. While some DRaaS providers still operate that way, that no longer applies to all of them.

Now organizations may call upon a DRaaS provider to help with recoveries even when they experience just a partial outage. This application recovery may occur on an on-premises backup appliance provided by the DRaaS provider as part of its offering.

2. They use clouds to host recoveries.

Some DRaaS providers may still make physical hosts available for some application recoveries. However, most make use of purpose-built or general-purpose clouds for application recoveries. DRaaS providers use these cloud resources to host an organization’s applications to perform DR testing or a real DR. Once completed, they can re-purpose the cloud resources for DR and DR testing for other organizations.

3. They gather the needed information for recovery and build out the templates needed for recovery.

Knowing what information to gather and then using that data to recreate a DR site can be a painstaking and lengthy process. While DRaaS providers have not eliminated this task, they shorten the time and effort required to do it. They know the right questions to ask and data to gather to ensure they can recover your environment at their site. Using this data, they build templates that they can use to programmatically recreate your IT environment in their cloud.
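As a rough illustration of what such a recovery template might capture, here is a hypothetical sketch. The field names and the provision() stub are invented for this example and do not reflect any particular DRaaS provider's format.

# Hypothetical "environment template" a DRaaS provider might assemble from
# the discovery data it gathers, then replay programmatically in its cloud.
RECOVERY_TEMPLATE = {
    "recovery_tier": 1,                       # restore order: tier 1 first
    "rto_hours": 4,                           # recovery time objective
    "rpo_minutes": 15,                        # recovery point objective
    "networks": [{"name": "app-net", "cidr": "10.20.0.0/24"}],
    "servers": [
        {"name": "erp-db", "cpu": 8, "ram_gb": 64, "restore_from": "latest-backup"},
        {"name": "erp-app", "cpu": 4, "ram_gb": 16, "depends_on": ["erp-db"]},
    ],
}

def provision(template: dict) -> None:
    """Stub: walk the template in dependency order and 'create' each resource."""
    for net in template["networks"]:
        print(f"creating network {net['name']} ({net['cidr']})")
    # Servers with no dependencies are restored first.
    for server in sorted(template["servers"],
                         key=lambda s: len(s.get("depends_on", []))):
        print(f"restoring {server['name']} from {server.get('restore_from', 'replica')}")

provision(RECOVERY_TEMPLATE)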

4. They can perform most or all the DR on your behalf.

When a disaster strikes, the stress meter for IT staff goes straight through the roof. This stems, in part, from the fact that few, if any, of them have ever been called upon to perform a DR. As a result, they have no practical experience in performing one.

In response to this common shortfall, a growing number of DRaaS providers perform the entire DR, or minimally assist with it. Once they have recovered the applications, they turn control of the applications over to the company. At that point, the company may resume its production operations running in the DRaaS provider’s site.

DRaaS Providers Come of Age

Organizations should have a healthy fear of disasters and the challenge that they present for recovery. To pretend that disasters never happen ignores the realities that those in Southern California and Louisiana may face right now. Disasters do occur and organizations must prepare to respond.

DRaaS providers now provide a means for organizations to implement viable DR plans. They provide organizations with the means to recover on-premises or off-site and can do the DR on their behalf. Currently, small and midsize organizations remain the best fit for today’s DRaaS providers. However, today’s DRaaS solutions foreshadow what should become available in the next 5-10 years for large enterprises as well.


When dealing with a crisis, a leader’s role is largely to guide others through it. Good leaders understand that everyone responds to crisis differently and know that they must be prepared for the myriad ways people may react when faced with tragedy.

Tragedies often trigger additional tragedies. When under the influence of the shock of traumatic stress, people and organizations often make errors in judgment that lead to additional losses. Rash high-risk decisions and behaviors, precipitous resignations, hostile blaming, drunk driving charges, violence at home and work, and increased suicide risk are examples of how traumatized people can make a bad situation worse.

When people are shocked by a tragedy, immediate chemical and neuro-psychological adjustments take place to address the present threat in one of three ways: Fight, Flight, or Freeze. Whereas these responses can have short-term survival value in the midst of a crisis, they often do not translate well to productivity in today’s work environments.

...

https://www.riskandresiliencehub.com/how-to-contain-the-chaos-and-empower-your-employees-during-a-crisis/

(TNS) — You’ve been meaning to put together a disaster supplies kit ever since you heard predictions that a Cascadia quake and tsunami would impact most everyone around you.

Last week’s record-breaking, back-to-back earthquakes in California might have made the need to assemble essentials in one place seem more urgent.

Ready for the ‘Big One’? What you need now to prepare for an earthquake

Here are the three actions you need to take before you face a disaster.

You can build an emergency preparedness kit yourself, following guidelines by the American Red Cross and Ready.gov, or you can buy a ready-made survival pack.

...

https://www.govtech.com/em/preparedness/Here-are-Six-Emergency-Kits-to-Help-you-Prepared-for-an-Earthquake.html

Cyber threats are simply a business reality in the modern age, but with the right knowledge and tools, we can protect our businesses, employees and customers. Davis Malm’s Robert Munnelly outlines five actions companies can take to maximize long-term cyber safety.

Decades of experience in the age of broadband and security breaches have taught us important lessons about the steps companies should take to protect themselves, employees and customers from cybersecurity threats. Every company should make an effort to adopt specific action items so as to maximize opportunities for long-term cyber safety in this increasingly interconnected world.

Following are five actions companies must take to prepare.

...

https://www.corporatecomplianceinsights.com/cyber-safety-minimize-risk/

At the professional level, the critical tasks leading up to, during and following a disaster involve coordinating multiorganizational, intergovernmental and intersectoral response and recovery operations. In the early 1970s, wildfires in California brought about the implementation of incident command systems. Since then, the landscape has changed considerably at all levels of government. It has, likewise, required changes from emergency management professionals.

For over four decades I have had the privilege of working as an emergency manager in this great country. In every one of my jobs, leadership has been the common denominator for success. Leadership is what leaves a jurisdiction more resilient over time.

...

https://www.riskandresiliencehub.com/meta-leadership-re-defining-emergency-management/

With M&A activity on the rise, a commitment to a strong data management program can help businesses to avoid expensive — and often dangerous — compliance and risk aggregation missteps. Kelvin Dickenson of Opus explains.

In 2018, there were 375 M&A deals valued at over $1 billion in the United States alone. Even in years with the least M&A activity, M&A transactions happen with enough frequency to effect substantial changes that impact numerous credit unions, banks, exchanges, broker-dealers and investment or commodities firms, not to mention many nonfinancial institutions.

Target companies end up with new parent companies, creating new hierarchical structures that can affect thousands of entities. Addresses and other pertinent details change, and headquarters may often be domiciled in altogether different states post-merger.

These and many other vitally important details can quickly shift, leaving once-trusted golden copies of data full of inaccuracies that proliferate incorrect information throughout an institution’s records, resulting in a domino effect of out-of-date data that travels from one information silo to the others.

...

https://www.corporatecomplianceinsights.com/m-and-a-data-maintenance-compliance/

(TNS) — Aftershocks from the recent earthquakes near Ridgecrest, Calif., are decreasing in both frequency and magnitude, and seismologists say they expect the pattern to continue.

The earthquakes on July 4 and 5 — one a magnitude 6.4 and the other a 7.1 — were the strongest to hit the area in 20 years. Thousands of aftershocks have already been reported, and scientists have said they expect thousands more — about 34,000 over the next six months.

But since an initial cluster of magnitude 5 and above quakes that struck in the hours following the 7.1 temblor, the aftershocks have been subsiding in intensity and striking less often, an analysis of seismological data shows.

...

https://www.govtech.com/em/disaster/Seismic-Activity-Slows-After-34000-Aftershocks-in-Southern-California.html

As we’ve reported previously, the majority of legacy Fortune 500 firms are no longer market leaders because their focus remains on protecting their traditional business in this era of digital transformation. In last month’s midwestern US regional Forrester Leadership Board meeting, Stephanie Hammes-Betti, the SVP of innovation design at U.S. Bank, took us through the investments, culture change, and prioritizations she’s helped her firm make to maintain its market leadership.

Her team’s focus has been on proactively recognizing and driving banking, technology, and customer life changes. To achieve these aims, they have a team of over 35 full-time members who work collectively with a much broader set of fans and champions. And per our recommendations, their team is broken down into groups that prototype and drive their innovation efforts, shield their disruptive innovations, and drive culture change to better empower innovation efforts and companywide participation. In support of this last one, Stephanie’s team drives highly recommended education and participation programs detailed in the images below:

...

https://go.forrester.com/blogs/how-100-yr-old-firms-stay-relevant/


A crisis communication plan ensures employees and other key stakeholders receive timely information for their safety.

Mass notification technology has improved communication with patients, employees, media, and other stakeholders, which enriches morale and boosts engagement.

Though this technology exists, hospitals and others in the healthcare industry still face communication challenges that can cause operational, reputational, financial, and strategic risk. Read on to learn the top six communication challenges facing the healthcare industry and how to overcome them.

...

https://www.onsolve.com/blog/the-top-6-communication-challenges-of-the-healthcare-industry-and-how-to-overcome-them/

A commercial vessel suffered a significant malware attack in February, prompting the US Coast Guard to issue an advisory to all shipping companies: Here be malware.
 

In February 2019, a large ship bound for New York City radioed the US Coast Guard warning that the vessel was "experiencing a significant cyber incident impacting their shipboard network." 

The Coast Guard led an incident-response team to investigate the issue and found that malware had infected the ship's systems and significantly degraded functionality. Fortunately, essential systems for the control of the vessel were unimpeded.

On July 8, the military branch issued an alert to commercial vessels strongly recommending that they improve their cybersecurity in the wake of the incident, including segmenting shipboard networks, enforcing per-user passwords and roles, installing basic security protections, and patching regularly. 

...

https://www.darkreading.com/vulnerabilities---threats/coast-guard-warns-shipping-firms-of-maritime-cyberattacks/d/d-id/1335198

(TNS) — Glenn Pomeroy, head of the California Earthquake Authority, spent the weekend in Ridgecrest, near the epicenter of Friday’s 7.1-magnitude quake and its day-earlier, 6.4-magnitude baby brother.

“Driving around, you could clearly see there had been a disaster,” he told me. “Streetlights were blinking, stores were closed, the ground was still shaking.”

But the people of Ridgecrest and nearby communities were lucky, Pomeroy said. The damage could have been a lot worse.

“You put a 7.1 under Los Angeles, you’d be looking at a whole different situation,” he said. “We could be looking at billions of dollars in damage.”

...

https://www.govtech.com/em/disaster/Most-Californians-Dont-Have-Earthquake-Insurance-Should-They.html

Microsoft issued fixes for 77 unique vulnerabilities this Patch Tuesday, including two zero-day privilege escalation vulnerabilities seen exploited in the wild.
 

Microsoft today patched 77 vulnerabilities and issued two advisories as part of its July security update. Two of these bugs are under active attack; six were publicly known at the time fixes were released.

Of the CVEs fixed today, 15 were categorized as Critical, 62 were rated Important, and one was ranked Moderate in severity. Patches address vulnerabilities in a range of Microsoft services including Microsoft Windows, Internet Explorer, Office and Office Services and Web Apps, Azure, Azure DevOps, .NET Framework, Visual Studio, SQL Server, ASP.NET, Exchange Server, and Open Source Software.

One of the vulnerabilities under active attack is CVE-2019-1132, a Win32k elevation of privilege flaw that exists when the Win32k component fails to properly handle objects in memory. Successful exploitation could lead to arbitrary code execution in kernel mode, which is normally reserved for trusted OS functions. An attacker would need access to a target system to exploit the bug and elevate privileges.

...

https://www.darkreading.com/risk/microsoft-patches-zero-day-vulnerabilities-under-active-attack/d/d-id/1335197

Has your organization prepared for an insider threat workplace violence scenario?

The most recent shootings at the Virginia Beach Municipal Public Works Building and the Earl Cabell Federal Building in Dallas are a stark and painful reminder that security leaders and the public at large must be ever vigilant in the face of the ongoing threat continuum. Twelve victims tragically lost their lives in Virginia Beach. The threats associated with workplace violence, and the insider threat in particular, are quite concerning. The Virginia Beach shooter, a 15-year employee of the organization, did not appear to be disenfranchised, nor was his social media imprint on anyone’s radar screen. He even resigned from his position via email the morning of the shootings.

In light of the current challenges, the following question comes to mind:

...

https://www.preparedex.com/insider-threat-early-indicators-workplace-violence-during-age-uncertainty/

Just because your data isn't on-premises doesn't mean you're not responsible for security

 

The cloud certainly offers advantages, but as with any large-scale deployment, the cloud can also offer unforeseen challenges. The concept of the cloud just being "someone else's data center" makes me cringe because it assumes you're relinquishing security responsibility because "someone else will take care of it."

Yes, cloud systems, networks, and applications are not physically located within your control, but security responsibility and risk mitigation are. Cloud infrastructure providers allow a great deal of control in terms of how you set up that environment, what you put there, how you protect your data, and how you monitor that environment. Managing risk throughout that environment and providing alignment with your existing security framework is what's most important. 
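One simple control that stays on the customer's side of that shared responsibility is periodically auditing storage exposure. The sketch below, assuming an AWS environment with boto3 installed and credentials configured, flags S3 buckets that lack a full public-access block; it is a minimal illustration, not a complete cloud security check.

# Minimal sketch: flag S3 buckets without a full public-access block.
# Assumes boto3 and AWS credentials; a real audit would also examine bucket
# policies, ACLs, encryption, and logging.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no configuration at all counts as not blocked
    if not fully_blocked:
        print(f"review bucket: {name} (public access not fully blocked)")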

...

https://www.darkreading.com/perimeter/cloud-security-and-risk-mitigation/a/d-id/1335100


Does your organization have a strategy for protecting employees at home as a part of your overall cybersecurity program? Something that includes, but goes well beyond, awareness training?

If You Answered “No,” You’re Not Alone

Employee privacy is a big reason why not. And yet, as the connected smart home becomes an increasing threat and potential source of compromise for the organization, it’s a question that we all need to think about. That’s why I’m kicking off new research to provide some clarity as to what is realistic to do about this and identify what a holistic approach would look like — one that supports employee privacy. Note: This is part two of a series, the first of which explores the enterprise risks of consumer connected devices, led by my colleague Chris Sherman.

...

https://go.forrester.com/blogs/enterprise-meets-consumer-security-exploring-approaches-to-protect-employees-at-home/

(TNS) — After two mass shootings, it became obvious that the region’s emergency radio system was a failure. The obsolete system imploded both times and left police officers struggling to communicate as bullet-riddled bodies piled up. Something had to be done, and quickly.

Years later, however, resistance from the public continues to grow. Homeowners across Broward and Palm Beach counties are fighting new communications towers that planners insist are critical to ensuring the public’s safety. The latest battle comes west of Boca Raton.

More than 1,200 people have signed an online petition opposing a 400-foot communications tower that would integrate with Palm Beach County’s new public safety network and smooth communication with multiple agencies, including Broward County.

...

https://www.govtech.com/em/safety/Obsolete-Florida-Radio-System-Imploded-During-Two-Mass-Shootings.html

Every year businesses temporarily shut down – or close forever – because of a disaster. The 6.4- and 7.1-magnitude earthquakes that struck near Ridgecrest, California, late last week are stark reminders of a quake’s ability to disrupt a business’ operations. Even though many California businesses are in seismically active areas, only one out of 10 commercial buildings is insured for quakes, according to the California Department of Insurance.

Depending upon a business’ location, the threat to its operations may come from risks other than earthquakes, such as a hurricane, tornado, or wildfire. Forty-plus percent of U.S. small businesses do not reopen after a disaster impacts them, the Federal Emergency Management Agency (FEMA) estimates. But by taking measures to prepare, businesses can increase their chance of recovering financially from a disaster.

Steps businesses can take in the aftermath of this month’s southern California earthquakes:

...

http://www.iii.org/insuranceindustryblog/got-earthquake-insurance-businesses-should-asses-their-readiness-for-the-big-one-and-other-natural-disasters/

Every company should be expecting a security breach at some point. MetricStream’s Vibhav Agarwal discusses the importance of tackling cybersecurity directly and what risk-focused executives must do to avoid disaster and position their organizations for success.

In a world where organizations are rapidly digitally transforming, cybersecurity has clearly become a business-critical issue. Every firm has unique data that offers it a strategic, competitive advantage – but in the event of a security breach, that data can quickly be compromised. Here’s what businesses can do to avert disaster.

Plan Well and Execute

Companies need to realize that the velocity and sophistication of online attacks have vastly increased, so they must adapt to survive in the modern world. The traditional method of developing and evaluating a strategy over the long term is no longer enough.

Preventing data breaches needs to be a top priority for all firms in the age of GDPR. Organizations must utilize real-time assessments that continuously secure critical assets and information – companies are constantly being attacked, even if they don’t realize it.

...

https://www.corporatecomplianceinsights.com/cybersecurity-issues-cyber-strategies/

Consumer connected devices are presenting increasingly attractive targets to cybercriminals, putting home networks and potentially enterprise assets at risk. In just the last two weeks, we’ve seen Samsung indicate that antimalware should be used on its “connected,” or smart, TVs (almost all TVs are connected these days — just try to find a nonconnected TV next time you are in a large retail store). Days later, Forbes reported a data breach involving the exposure of 2 billion records related to smart-home devices. As these devices proliferate among your employees and even in the corporate network, new risks and potential exploits need to be accounted for.

Unfortunately, there is little dialogue among security leaders today regarding how the expanding home networks of their employees affects an organization’s overall security posture. It’s understandable that security professionals wouldn’t want to focus on personal devices in their homes, as employee privacy is just as big of a concern for any organization. However, as bring-your-own-device and work-from-home policies become more ubiquitous, a larger number of devices that are connected to the company network will also invariably connect to the home or car networks maintained by their employees.

...

https://go.forrester.com/blogs/uncovering-the-enterprise-risks-posed-by-consumer-connected-devices/

People in business continuity talk a lot about black swans: unexpected events that come from outside normal experience and have strongly negative effects.

Black swan events are definitely worth thinking about and being prepared for due to their potentially catastrophic impacts.

However, in today’s post, we are going to talk about the opposite of black swan events. Some people refer to such events as white swans, but I am going to refer to them as “8 Bad Things That Are Likely to Happen This Week.”

...

https://www.mha-it.com/2019/07/03/common-business-continuity-threats/

Emerging technologies are complicating compliance for financial services firms. Smarsh’s Robert Cruz, an expert on information governance and regulatory compliance, shares some of the key challenges they face, as well as a path forward.

For financial firms to stay compliant, they need to meet all the books-and-records and supervisory mandates required by FINRA and the SEC. But the ever-expanding variety of emerging technologies continues to raise the bar for compliance oversight.

For instance, financial crimes can now be masked by conversations that purposely jump across communications platforms, also referred to as “channel hopping.” Likewise, emojis are increasingly carriers of sentiment or emotion, which can easily go undetected with today’s compliance tools. Let’s face it, not many lexicon-based systems will recognize a combination of two chickens, a cowboy hat and a palm tree as a financial risk.
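To illustrate the gap described above, here is a toy sketch of a lexicon-based filter that matches plain-text terms but ignores emoji, next to a variant that also checks a made-up emoji watchlist. The terms and emoji are invented for the example.

# Toy illustration of why a plain-text lexicon can miss emoji-borne meaning.
# Both watchlists below are invented for this example.
TEXT_LEXICON = {"guarantee", "insider", "wire the funds"}
EMOJI_WATCHLIST = {"\U0001F414", "\U0001F920", "\U0001F334"}  # chicken, cowboy-hat face, palm tree

def lexicon_flag(message: str) -> bool:
    """Classic keyword check: looks at words and phrases only."""
    lowered = message.lower()
    return any(term in lowered for term in TEXT_LEXICON)

def emoji_aware_flag(message: str) -> bool:
    """Same check, plus a pass over individual emoji code points."""
    return lexicon_flag(message) or any(ch in EMOJI_WATCHLIST for ch in message)

msg = "Meet you offline \U0001F414\U0001F414 \U0001F920 \U0001F334"
print(lexicon_flag(msg))      # False: nothing in the word lexicon
print(emoji_aware_flag(msg))  # True: the emoji sequence trips the watchlist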

...

https://www.corporatecomplianceinsights.com/financial-firms-digital-communications-risk/

(TNS) - For the second consecutive day, a major earthquake shook Southern California and was felt far beyond, stopping the NBA’s Summer League games in Las Vegas, forcing the evacuation of rides at Disneyland in Anaheim, and reminding residents that the state is always on unstable ground and destined for more.

It registered a magnitude of 7.1 on Friday night and was the strongest earthquake to hit the state in two decades, causing fires in the small town of Ridgecrest (population 29,000), and sending shock waves felt more than 300 miles away in all directions – Sacramento, Phoenix, Mexico.

According to the U.S. Geological Survey, the quake hit at 8:19 p.m., 11 miles north-northeast of Ridgecrest, near where a magnitude 6.4 quake hit Thursday morning, but was more shallow. It was followed by nearly 100 aftershocks, some of which were magnitude 4.0 or higher across the Searles Valley, an area straddling Inyo, Kern and San Bernardino counties.

...

https://www.govtech.com/em/disaster/-No-Deaths-Reported-After-71-Earthquake-in-Southern-California.html

 

This three-part blog addresses some of the human concerns of recovery:

Part 1 – What: What Are Human Concerns

Part 2 – So What: The Risks

Part 3 – Now What: The Applications

Now What: Applications for Meaning and Resiliency

Resiliency doesn’t mean you “bounce back” to your original shape after a crisis or challenge. There is no bounce backwards. The past is past. Resiliency means moving forward in order to reclaim balance with new meaning. Balance can be lost if focus is only on one aspect of life – physical, emotional, spiritual or mental. The return to “balance” after a major disaster is the essence of recovery work. Body, Mind, Spirit, and Emotions need daily care and maintenance to stay in balance, remain stable, and support sustainable and healthy responses to challenge. Such self-care practices, established beforehand, enhance resiliency to meet life’s challenges when they arrive.

...

https://www.riskandresiliencehub.com/human-concerns-in-disaster-recovery-part-3/

Human vulnerability presents a real threat for organizations. But it's also a remarkable opportunity to turn employees into our strongest cyber warriors.

 

Employee awareness has become a critical necessity for modern organizational security. While the human factor has always presented an "inside threat" for companies, that threat is fast-growing: the more social, hyperconnected and fast-paced our culture becomes, the greater the risks employees bring into the organizational cybernetic space.

Worse, no matter how robust today's cyber defense systems are, it seems that attackers always remain one step ahead. With vast data publicly available on any employee, "bad guys" easily gather and utilize personal information to target specific employee groups. These sophisticated tactics instantly expose employees' vulnerabilities and turn them into human weapons, which in some recent global cyberattacks have had a destructive impact on the entire organization.

...

https://www.darkreading.com/perimeter/disarming-employee-weaponization/a/d-id/1335076


By BRITT LEWIS

Senior Vice President, Direct Sales and Business Development, Inmarsat Government Inc.

Seeing a disaster unfold on television or online triggers many emotions. It can, on occasion, be very difficult to watch the images being broadcast. Yet arriving in a devastated region in person as a first responder? The impact can be beyond description. Responders are surrounded by victims who need medical attention, need food and water, and are desperate to find and connect with their loved ones.

In providing relief, first responders must focus on these victims, without worrying about whether they are able to communicate with commanders at another site or send damage-related video and data to them. Nor should they be expected to have a detailed mastery of how a communication system works. The mission is about assistance and relief, not connectivity set-up. However, in any disaster, reliable communications is of critical importance.

In normal circumstances, we consider cell phone coverage as ubiquitous – a given. Yet, that is not always the case at a disaster scene, where commercial networks may be overloaded or sustain damage. Access to reliable, easy-to-install-and-operate communications amid such circumstances can, on first examination, appear very difficult to achieve.

But FirstNet, America’s dedicated public safety broadband communications platform, is changing that. It’s being built with AT&T in a public-private partnership with the First Responder Network Authority – an independent government authority. Since its launch, FirstNet has been reliably supporting public safety’s response to emergency and everyday situations. Public safety agencies used FirstNet during last year’s wildfires and hurricanes as well as tornadoes and flooding events this year. And FirstNet has stood up to the challenge, keeping first responders connected and enabling them to communicate when other systems went down.

Satellite communications (SATCOM) are a critical part of the FirstNet communications portfolio, helping to deliver the capabilities that “First In/Last Out” responders depend upon in hard-hit disaster areas. Inmarsat Government is proud to be part of the core team AT&T selected to help deliver the FirstNet communication ecosystem, bringing resilient, highly secure SATCOM capabilities for our country’s first responders.

The FirstNet ecosystem strengthens public safety communications, enabling faster and more effective coordination in disasters and emergencies. FirstNet users can leverage narrowband and wideband SATCOM solutions, which have been a trusted, reliable choice for public safety agencies’ mission-critical communication needs for nearly half a century and should be part of any disaster response/Continuity of Operations (COOP) planning. Unlike traditional wireline or cellular wireless systems, SATCOM uses satellites to “bounce” voice or data signals to or from a remote user through the sky and back to one or more geographically resilient downlink facilities (“earth stations”), which are connected to the global communication backbone networks. This resiliency enables communications virtually anywhere, as long as users have a “line-of-sight” path through the air to the satellite.

Through SATCOM solutions, users acquire instant voice, data and video services, using equipment that is often as simple and easy to use as a cell phone, and small and light enough to store in a backpack. These solutions are often embedded in communication systems. SATCOM solutions have proven themselves – over and over again – as irreplaceable in delivering the following, unique capabilities anywhere in the world:

Augmented, constant connectivity. In assessing damage and casualties, responders must connect to the command and control center as well as restore communications for the local community. This requires high bandwidth availability for seamless voice, data, image and video transmissions for a variety of applications. With SATCOM, those running the command and control center operations, for example, can dynamically allocate voice and data resources to where they are needed and to do so in real time. They transfer live video streams from affected areas back to the center so that the command and control center can observe and advise.

SATCOM offers a sound option to first responders. It is a dependable option – and often the only one – for augmenting “terrestrial” (LTE cellular or wireline) communications with enhanced, robust connectivity.

Highly reliable coverage. SATCOM offers ubiquitous satellite coverage no matter where first responders go. SATCOM services use satellites to reach any location on the planet. As such, satellite-based connectivity is unaffected by disasters/emergencies which may destroy local tower infrastructure and is accessible in the most remote or rural areas.

Flexible solutions. Solutions available to FirstNet users range from satellite phones for individual users to portable or vehicle-mounted solutions and fixed satellite capabilities. These fulfill a variety of public safety use case scenarios for SATCOM in remote areas, such as providing law enforcement officers, firefighters or emergency medical technicians (EMT) who operate in remote areas with a satellite phone for highly reliable voice services for emergencies. In addition, the solutions can equip first responder vehicles with dual LTE/SATCOM terminals to maintain constant or on-demand voice/data communications in rural areas. Boats or other maritime craft can be equipped with SATCOM units for operations offshore or over bodies of water where cellular coverage does not exist.

Easy setup/deployment. As indicated, public safety organizations and “First In/Last Out” responder units must focus on the mission at hand. SATCOM allows them to meet their immediate, key objectives through capabilities which involve minimum installation time; users are up and running within a few minutes. For example, first responders in disaster-prone areas often depend upon rapidly deployable “SATCOM go kits,” using satellite phones and/or SATCOM broadband terminals they can easily set up. They can deploy these man-portable or broadband voice/data satellite kits in under 10 minutes, to establish incident command outposts in remote areas for voice, video conferencing and data. By default, the kits link to LTE/cellular networks to create hotspots. Yet, they automatically switch over to satellite global broadband networks anytime local networks are unavailable (a simple sketch of this failover logic appears at the end of this article). This means first responders stay connected during floods, power outages, forest fires and more, regardless of their location and situation.

On-the-move and on-the-pause responders depend upon Vehicular Network Solutions (VNS), which combine LTE and satellite for true “go anywhere” vehicle connectivity. Built specifically for FirstNet users, VNS utilizes cellular or satellite backhaul for in-vehicle communications and/or extends a Wi-Fi “bubble” of connectivity to a small number of users outside the vehicle. It combines an off-the-shelf In-Vehicle Router system with multiple communication input capability and the ability to intelligently select among connectivity paths.

This brings uninterrupted voice and data capability during the “first 60 minutes” after a disaster strikes, which could stretch to days and even weeks as recovery efforts continue, enabling essential communications and information sharing. This is when first responders turn to very small aperture terminals (VSAT), such as Inmarsat Global Xpress, to meet the expanding and increasing needs of their mission. Global Xpress is the first and only end-to-end commercial Ka-band network from a single operator available today. Only a Global Xpress terminal and standard monthly subscription are required to connect anywhere in the world at any time, and then transmit and receive large data, such as that from high-speed internet and video streaming. From the moment the transit case is opened, connectivity can be established in under seven minutes with minimal operator interaction. Once online, committed information rates with 99.5% availability pave the way for mission success. Customers also have single-source access to a U.S.-based network operations center that is certified and cleared and available 24/7/365 with just one phone call.

From the arrival of the first responders within hours of the crisis and then up to days later once the emergency response and relief mission expand, SATCOM has proven to be there when commercial infrastructure and mobile phone networks may be overloaded, damaged or non-existent. It helps ensure a “First In/Last Out” presence delivering immediate access that is easy to install and operate, with “anytime/anywhere” connectivity – until the mission is completed. Via FirstNet, SATCOM allows them to meet their immediate, key objectives through capabilities that help ensure connectivity no matter where they are, or what circumstances they face. Because these capabilities are highly secure and easy to set up – with readily available support at all times – responders may now perceive high-bandwidth communication access as a given. With this, they can focus entirely on the task at hand, in providing support to the victims and communities they serve.
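As a closing illustration of the automatic LTE-to-satellite switchover mentioned above, here is a minimal sketch of the failover logic. The link names, health probe and priority order are hypothetical and not tied to any specific FirstNet or Inmarsat product.

# Hypothetical sketch of "prefer LTE, fall back to satellite" link selection.
import random

def link_is_healthy(link: str) -> bool:
    """Stand-in health probe; a real kit would test signal, latency and loss."""
    if link == "lte":
        return random.random() > 0.5  # simulate an overloaded or damaged local network
    return True  # assume the satellite path is reachable

def select_uplink(preferred_order=("lte", "satellite")) -> str:
    """Walk the priority list and return the first usable link."""
    for link in preferred_order:
        if link_is_healthy(link):
            return link
    raise RuntimeError("no uplink available")

print(f"routing traffic over: {select_uplink()}")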

A Russian-speaking group has sent thousands of emails containing new malware to individuals working at financial institutions in the US, United Arab Emirates, and Singapore.

Russian-speaking threat group TA505 has begun targeting individuals working at financial institutions in the US, United Arab Emirates, and Singapore with new malware for downloading a dangerous remote access Trojan (RAT).

Researchers at Proofpoint say they have observed TA505 sending out tens of thousands of emails containing the downloader — dubbed "AndroMut" — to users in the three countries. The group is also targeting users in South Korea in a separate but similar campaign.

In both campaigns, TA505 is using AndroMut to download "FlawedAmmyy," a full-featured RAT that allows the attackers to gain administrative control of an infected device to monitor user activity, profile the system, and steal credentials and other data from it.

...

https://www.darkreading.com/attacks-breaches/ta505-group-launches-new-targeted-attacks/d/d-id/1335136


Employers must make clear to their employees what compliance topics and policies are essential to the organization. As Skillsoft’s John Arendes explains, it’s to the employer’s benefit to help employees deal with information overload.

The workplace continues to evolve over time; from the impact of startup cultures to new technology, it is not the same place it was when I started my first investment banking job out of college in 1989. On my first day, I received the 10-page employee handbook, in hard copy, that could easily be carried around in my work bag. My manager explained to the group that all the sections were of importance, but the most important section to know was the travel and expense section.

That statement alone shows how much times have changed. Today, corporate policy documents are hundreds of pages, and employees need to understand a host of various policies, many of which can be found in a company’s code of conduct. Given the significant increase in paperwork and policies in today’s climate, companies need a system that allows them to distribute, track and control policy versions – and this all needs to flow seamlessly.

...

https://www.corporatecomplianceinsights.com/mitigate-risk-company-policies/

A wave of new MacOS malware over the past month includes a zero-day exploit and other attack code.

A wave of malware targeting MacOS over the past month has raised the profile of the operating system once advertised as much safer than Windows. The newest attack code for the Mac includes three pieces of malware found in June — a zero-day exploit, a package that includes sophisticated anti-detection and obfuscation routines, and a family of malware that uses the Safari browser as an attack surface.

The zero-day exploit, dubbed OSX/Linker by researchers at Intego who discovered it, takes advantage of a vulnerability in MacOS Gatekeeper — the MacOS function that enforces code-signing and can limit program execution to properly signed code from trusted publishers.
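On a Mac, you can see the kind of assessment Gatekeeper performs by asking the system's own tools whether an app is acceptably signed. The sketch below shells out to Apple's spctl and codesign utilities; it is a quick signature-checking illustration, not a reconstruction of the OSX/Linker bypass.

# Minimal sketch (macOS only): ask whether an application bundle passes
# Gatekeeper's signing assessment, using Apple's spctl and codesign tools.
import subprocess
import sys

def gatekeeper_accepts(app_path: str) -> bool:
    """Return True if spctl assesses the app as acceptable to execute."""
    result = subprocess.run(
        ["spctl", "--assess", "--type", "execute", "--verbose", app_path],
        capture_output=True, text=True)
    return result.returncode == 0

def signature_is_intact(app_path: str) -> bool:
    """Return True if the code signature verifies cleanly."""
    result = subprocess.run(
        ["codesign", "--verify", "--deep", "--strict", app_path],
        capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    app = sys.argv[1] if len(sys.argv) > 1 else "/Applications/Safari.app"
    print("gatekeeper ok:", gatekeeper_accepts(app))
    print("signature ok:", signature_is_intact(app))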

...

https://www.darkreading.com/attacks-breaches/new-macos-malware-discovered-/d/d-id/1335135


Internal audit must know how to respond when business process owners want to go faster and document less (such as in Agile environments). Nielsen’s Kevin Alvero and Wade Cassels discuss what IA can do to meet these seemingly contradictory goals.

In the five months between the crashes of the Boeing 737 Max 8 airplanes in Indonesia and Ethiopia that resulted in the deaths of 189 people and 157 people respectively, Boeing received multiple complaints from pilots about the Max 8’s autopilot system, according to an NBC News report.

Several of those complaints mentioned insufficient documentation, with one even referring to the aircraft’s manual as “criminally insufficient.” A CBC report suggested that details about the Max 8’s MCAS computer system, which was the focus of the investigation into both crashes, were at one time included in the Max 8’s manual but left out of the final draft. Meanwhile, an investigation was also opened into the process by which the Federal Aviation Administration (FAA) had certified the Max’s flight control system.

...

https://www.corporatecomplianceinsights.com/audit-business-process-documentation/

As nation-states and rogue actors increasingly probe critical infrastructure, policy and technology experts worry that satellite and space systems are on the front lines.

Information from satellites fuels a great deal of today's technology, from the intelligence gathering conducted by nation-states, to the global positioning system used for vehicle navigation, to the targeting used by "smart" weapons. 

Little surprise, then, that cybersecurity and policy experts worry that the relative insecurity of satellite systems opens them to attack. In a paper released by the non-profit think tank Chatham House (The Royal Institute of International Affairs), Beyza Unal, a senior research fellow in international security, warned that the reliance of space-based systems and satellites on civilian infrastructure means greater vulnerability to attack in times of conflict and espionage in times of peace. 

...

https://www.darkreading.com/attacks-breaches/cybersecurity-experts-worry-about-satellite-and-space-systems/d/d-id/1335131

(TNS) — A scholarship of $2,000 for a first responder in eastern North Carolina is available through WGU North Carolina and the N.C. Association of Fire Chiefs.

The Eastern North Carolina First Responders Scholarship is available to an emergency medical technician, firefighter or police officer working east of Interstate 95. The tuition credit of $500 per six-month term is renewable for up to four terms.

Deadline to apply is Sept. 15. More information is available at wgu.edu/ENC.

...

https://www.govtech.com/em/safety/2000-First-Responder-Scholarship-Available-in-North-Carolina.html

How writing patterns, online activities, and other unintentional identifiers can be used in cyber offense and defense.
 

As we move throughout our digital lives, we unknowingly leave traces — writing styles, cultural references, behavioral signatures — that can be compiled to form a profile of our online personas.

These identifiers are different from physical identifiers such as fingerprints, faces, handwriting, DNA, and voice, all of which allow law enforcement to trace crimes back to offenders and enable biometric authentication tools. But physical identifiers are often irrelevant when it comes to tracking criminals in the digital realm, where non-physical traits can prove useful.

...

https://www.darkreading.com/threat-intelligence/human-side-channels-behavioral-traces-we-leave-behind/d/d-id/1335129

The emergency management profession has grown and evolved in recent years, and the University of Central Florida’s (UCF) emergency management and homeland security programs have evolved right along with it.

The university first began offering a minor in emergency management and homeland security in 2003, then added a graduate certificate, and just last year added bachelor’s and master’s degree programs.

With a dedicated internship program and rigorous preparation for graduates of the programs, former students now dot the landscape of emergency management professionals. Former students have gone on to work at the American Red Cross, Lockheed Martin, the Orlando International Airport and local government emergency management offices.

...

https://www.govtech.com/em/preparedness/UCF-Emergency-Management-Degree-Programs-Evolve-with-the-Times.html

Staying ahead can feel impossible, but understanding that perfection is impossible can free you to make decisions about managing risk.
 

Every few years, there is a significant and often unexpected shift in the tactics that online criminals use to exploit us for profit. In the early 2000s, criminals ran roughshod through people's computers by exploiting simple buffer overflows and scripting flaws in email clients and using SQL injection attacks. That evolved into drive-by downloads through flaws in browsers and their clunky plug-ins. Late in the decade, criminals began employing social components, initially offering up fake antivirus products and then impersonating law enforcement agencies to trick us into paying imaginary fines and tickets. In 2013, someone got the bright idea to recycle an old trick at mass scale: ransomware.

...

https://www.darkreading.com/vulnerabilities---threats/in-cybercrimes-evolution-active-automated-attacks-are-the-latest-fad/a/d-id/1335073

When is the last time your organization conducted a cyber security tabletop exercise?

Cyber security teams are busy monitoring and responding to attacks against your organization’s information technology infrastructure. Should you still be conducting tabletop exercises? The answer is yes, of course you should. Although some teams seem to be in response mode on a regular basis, it’s still imperative that the cross-functional, wider response team works closely with the cyber response teams in order to coordinate response efforts. There are very important crisis response activities that the cross-functional crisis management team can start to prepare for early on in the unfolding situation. Even if the situation turns out not to be a crisis, activating and preparing the crisis management team has more pluses than minuses.

...

https://www.preparedex.com/4-essential-cyber-security-tabletop-exercise-tips/

When the only certainty is uncertainty, the IEC and ISO ‘risk management toolbox’ helps organizations to keep ahead of threats that could be detrimental to their success. 

All businesses face threats on an ongoing basis, ranging from unpredictable political landscapes to rapidly evolving technology and competitive disruption. IEC and ISO have developed a toolbox of risk management standards to help businesses prepare, respond and recover more efficiently. It includes a newly updated standard on risk assessment techniques.

IEC 31010, Risk management — Risk assessment techniques, features a range of techniques to identify and understand risk. It has been updated to expand its range of applications and to add more detail than ever before. It complements ISO 31000, Risk management.

...

https://www.iso.org/news/ref2403.html

The first-ever PwC’s Global Crisis Survey was released this year with insights into over 4,000 crises that occurred at 2,000 companies.

The survey comes at an ideal time, as crisis management conversations abound and techniques for mitigation evolve. The insights can give companies a greater understanding of how crises arise and advance, and of what other companies have faced when a crisis hit them.

Here are four thought-provoking takeaways from the survey that may cause you to look at crisis management differently.

...

https://www.onsolve.com/blog/4-takeaways-from-the-first-pwc-global-crisis-survey/

Recently, a relative of mine required surgery, which led to my spending long periods of time in a hospital. As a result of that experience, I developed the content of today’s blog: a list of nine ways we can make hospital BCM programs healthier.

They say that experience is the best teacher.

That was definitely true for me over the past few weeks when the illness of a close family member required me to spend long stretches of time in one of our city’s hospitals.

...

https://bcmmetrics.com/healthier-hospital-bcm-programs/

Late on the evening of May 4, 1988, the First Interstate Bancorp building in California caught fire. Luckily, the fire was put out quickly and only two of the employees working at the time were hurt, and their injuries were minor. Looking back, things could have been much worse. Our offices were damaged though, and not safe for us to be in.

So, what do you do when you have a thousand or so employees and no place for them to work?

We managed to find a temporary workspace in the downtown Los Angeles corridor, but had we been better prepared, we wouldn’t have had such a last-minute scramble. Questions quickly arose among our staff about pay, transportation, hours to be worked, when we would be able to return to our building, and so on. And we realized, ashamedly, we did not have the answers.

Our human resources directors quickly became aware we had no relevant HR policies to fall back on. There was nothing in the disaster preparedness plan to guide us. We were like the proverbial dog chasing its tail.

...

https://www.riskandresiliencehub.com/how-to-develop-a-contingent-hr-policy-for-disasters/

More than 4.5 million American households are at “high or extreme risk from wildfire,” says the Leavitt Group, an insurance brokerage firm. Which states are the most wildfire prone? Topping the list are Arizona, California, Colorado, Idaho, Nevada, New Mexico, Oregon, Texas, Utah and Washington.

The Leavitt Group also identifies the three leading wildfire risk factors as fuel (grass, trees and dense brush), slope (steep slopes that can increase wildfire speed and intensity) and access (dead-end roads that interfere with fire-fighting equipment). Persistent drought conditions in certain regions compound wildfire risk.

...

https://www.onsolve.com/blog/wildfire-season-is-here-prepare-your-community-with-mass-notification/

Could the world’s most congested cities ease commuters’ woes with flexible working?

40% of people cite their commute as the worst part of their day. On public transport, travellers often experience crowded conditions, stress, discomfort, disruption, delay, feelings of time being ‘wasted’ and, to top it off, their wallets are hit. While the cost of season tickets goes up, comfort levels are in decline, as more and more people flock into city centres at the same time to reach their offices. What they’re lacking is a little flexibility.

According to a 2018 report by Inrix, the world leader in mobility analytics, the top 10 most congested cities in the world include LA, New York, Sao Paulo, London, Paris and Moscow. A separate study by navigation company TomTom cited Mexico City, Bangkok, Jakarta and three Chinese cities (Chongqing, Beijing and Chengdu) among the top contenders in the congestion test. Wherever there is an economic hub, congestion follows, and frustration among commuters along with it.

...

https://www.regus.com/work-us/tackling-commuter-congestion/

(TNS) - Enid Police Department's 911 dispatchers are getting more accurate locations for cellphone users calling 911 via the RapidSOS software used in the center.

Lt. Warren Wilson said the software is able to pinpoint callers using iPhones with iOS 12+ and Android phones with version 4.0+, decreasing response times for emergency personnel dispatched to 911 calls.

Wilson said last year the 911 center received about 19,000 calls from those using cellphones, compared to about 7,000 who called 911 using a landline. He said the trend is toward more calls from cellphone users and fewer from landlines.

...

https://www.govtech.com/em/safety/New-Location-Software-Makes-Finding-911-Cellphone-Callers-Easier-.html

This blog is a summary of an interesting internal discussion we had among analysts. I’d like to extend my thanks to Jessica Liu, Martha Bennett, Fatemeh Khatibloo, Brigitte Majewski, Sucharita Kodali, and Benjamin Ensor, who all helped with the thinking. I’m merely putting the pieces together for you here, because connecting the dots is where I come in at Forrester . . .

As I ramp up an effort to refresh our report on top technology trends to watch, one of the things I find most interesting is how technologies build upon and accelerate each other (see the law of accelerating returns). For one thing, we have to wrestle as a society with a number of moral dilemmas that I consider part of digital ethics. Facebook, Inc. (inclusive of the Facebook app, Messenger, Instagram, WhatsApp, Facebook’s Audience Network, and its other apps, services, and hardware) is the best example. It has become a new world superpower; but instead of nukes, it combines technologies in order to accelerate disruption and expand its influence at a scale we simply can’t grasp. This is forcing us to pay attention to digital ethics and wonder what the new reality that Facebook is helping create means to businesses and consumers.

...

https://go.forrester.com/blogs/facebooks-recent-moves-highlight-the-grand-challenge-of-digital-ethics/

Archived data great for training and planning

By Glen Denny, Baron Services, Inc.

Historical weather conditions can be used for a variety of purposes, including simulation exercises for staff training; proactive emergency weather planning; and proving (or disproving) hazardous conditions for insurance claims. Baron Historical Weather Data, an optional collection of archived weather data for Baron Threat Net, lets users extract and view weather data from up to 8 years of archived radar, hail and tornado detection, and flooding data. Depending upon the user’s needs, the service can be configured with access to a window of either 30 days or 365 days of historical data. Other available options for historical data have disadvantages, including difficulty in collecting the data, inability to display data or point query a static image, and issues with using the data to make a meteorological analysis.

Using data for simulation exercises for staff training

Historical weather data is a great tool to use for conducting realistic severe weather simulations during drills and training exercises. For example, using historical lightning information may assist in training school personnel on what conditions look like when it is time to enact their lightning safety plan.

Reenactments of severe weather and lightning events are beneficial for school staff to understand how and when actions should have been taken and what to do the next time a similar weather event happens. It takes time to move people to safety at sporting events and stadiums. Examining historical events helps decision makers formulate better plans for safer execution in live weather events.

Post-event analysis for training and better decision making is key to keeping people safe. A stadium filled with fans for a major sporting event with severe weather and lightning can be extremely deadly. Running a post-event exercise with school staff can be extremely beneficial to building plans that keep everyone safe for future events.

Historical data key to proactive emergency planning

School personnel can also use historical data as part of proactive advance planning, allowing them to take precautionary measures. For example, if an event in the past year caused an issue, like flooding of an athletic field or facility, officials can look back to that day in the archive using the Baron Threat Net total accumulation product, and then compare it with the forecast precipitation accumulation from the Baron weather model to see if the upcoming weather is of comparable scale to the event that caused the issue. Similarly, users could look at historical road condition data and compare it to the road conditions forecast.
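
As a rough sketch of the kind of comparison described above, the snippet below checks a forecast accumulation against the total recorded during a past problem event. The figures and the 80% threshold are purely illustrative assumptions, not Baron data or guidance.

```python
# Illustrative only: compare a forecast precipitation total against the total
# recorded during a past event that caused flooding. All values are made up.
def comparable_to_past_event(forecast_in: float, past_event_in: float,
                             ratio_threshold: float = 0.8) -> bool:
    """Flag the upcoming weather if it reaches at least `ratio_threshold`
    of the accumulation seen in the past problem event."""
    return forecast_in >= ratio_threshold * past_event_in

past_flooding_total = 3.6   # inches recorded the day the athletic field flooded (hypothetical)
forecast_total = 3.1        # inches forecast for the upcoming event (hypothetical)

if comparable_to_past_event(forecast_total, past_flooding_total):
    print("Forecast is of comparable scale to the past flooding event; plan precautions.")
else:
    print("Forecast accumulation is well below the past problem event.")
```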

The data can also be used for making the difficult call to cancel school. The forecast road weather lets officials look at problem areas 24 hours before the weather happens. The historical road weather helps school and transportation officials examine problem areas after the event and make contingency plans based on forecast and actual conditions.

Insurance claims process improved with use of historical data

Should a weather-related accident occur, viewing the historical conditions can be useful in supporting accurate claim validation for insurance and funding purposes. In addition, if an insurance claim needs to be made for damage to school property, school personnel can use the lightning, hail path, damaging wind path, or critical weather indicators to see precisely where and when the damage was likely to have occurred. 

Similarly, if a claim is made against a school system due to a person falling on an icy sidewalk on school property, temperature from the Baron current conditions product and road condition data may be of assistance in verifying the claim.

Underneath the hood

The optional Baron Historical Weather Data addition to the standard Baron Threat Net subscription includes a wide variety of data products, including high-resolution radar, standard radar, infrared satellite, damaging wind, road conditions, and hail path, as well as 24-hour rainfall accumulation, current weather, and current threats.

With up to 8 years of data on offer, users can select a specific product and review up to 72 hours of data at one time, or review a specific time on a specific date. Information is available for any given area in the U.S., and historical products can be layered; for example, hail swath and radar data. Packages are available in 7-day, 30-day, or 1-year increments.

Other available options for historical weather data are lacking

There are several ways school and campus safety officials can gain access to historical data, but many have disadvantages, including difficulty in collecting the data, inability to display the data, and the inability to point query a static image. Also, officials may not have the knowledge needed to use the data for making a meteorological analysis. In some cases, including road conditions, there is no available archived data source.

For instance, radar data may be obtained from the National Centers for Environmental Information (NCEI), but the process is not straightforward, making it time consuming. Users may have radar data, but lack the knowledge base to be able to interpret it. By contrast, with Baron Threat Net Historical Data, radar imagery can be displayed, with critical weather indicators overlaid, taking the guesswork out of the equation.

There is no straightforward path to obtaining historical weather conditions for specific school districts. The local office of the National Weather Service may be of some help but their sources are limited. By contrast, Baron historical data brings together many sources of weather and lightning data for post-event analysis and validation. Baron Threat Net is the only online tool in the public safety space with a collection of live observations, forecast tools, and historical data access.

https://www.virtual-corp.com/business-continuity/table-top-exercise-revelations/

 

By Bob Farkas, PMP, AMBCI, SCRA

One of the most useful, insightful, and entertaining business continuity activities is the table top exercise. These exercises are generally well known to Business Continuity practitioners as an important step in emergency preparedness and disaster recovery planning. They often involve key personnel discussing simulated scenarios, the parts their roles play, and how they would respond in emergency situations. In this article, I will present a real example that illustrates the type of useful information that can be obtained from an exercise. Moreover, an exercise scenario does not have to be complicated to provide value. To quote Leonardo Da Vinci, “Simplicity is the ultimate sophistication.”

Setting the Stage 

Recently, a West coast high tech firm requested assistance from Virtual Corporation with implementing a business resiliency program throughout the enterprise. Each department that was deemed in-scope completed a Business Impact Analysis (BIA) and developed its initial Business Continuity Plan. If the department Recovery Time Objective (RTO) was 24 hours or less, it would conclude its business continuity planning activities with a table top exercise.

One of the firm’s divisions located in the United Kingdom fell into the category that needed to complete a table top exercise. The exercise included participants from three critical departments which provide security monitoring services and support for their commercial clients. The local business continuity lead determined a building fire would be the appropriate scenario for the exercise.

The Dilemma

Once the exercise began, participants described their initial actions in responding to the building evacuation announcement. They pointed out that the company’s safety and evacuation procedures require that laptops be left behind at the employees’ work areas to facilitate and ensure everyone’s swift and safe evacuation from the building during a potential fire or other disruptive event.  The scenario was advanced to where the fire had been extinguished and the Fire Marshal declared the building unsafe to occupy. At this point, the participants in the exercise indicated management would instruct employees to go home. A major issue quickly became apparent. They would be unable to work remotely since their laptops remained in the building that they could no longer access. The short-term solution was to use their mobile phones to hand off work to other locations and manage work as best as possible with their mobile phones until their laptops were replaced.

During the discussions that followed, the question arose as to how quickly replacement laptops could be provisioned. Not soon enough it turned out; the company did not have a local (UK) IT service center. Laptops are supplied from the company’s facility in Dublin, Ireland. This led to a list of other issues and questions that needed to be addressed such as machine inventory, availability of pre-imaged machines, prioritization of need, expedited delivery and identifying alternate, local sources. The real magnitude and impact to these departments’ abilities to continue work was not fully considered until this exercise brought these issues to the forefront.

Exercises also challenge common assumptions and beliefs. In the building fire scenario described above, virtually everyone’s initial reaction to not being able to work from their impacted location was that they would work remotely or from home, without carefully considering the implications of that decision; no one thought they’d be without a laptop until reminded that their laptops could not be retrieved. Raising such issues during the exercise, one of the key benefits of exercising, forces people to consider the situation more carefully and think through other alternative recovery options, such as relocating to another facility (with available computers), or mitigations such as having a local laptop supplier.

Conclusion

Much can be learned from table top exercises as illustrated by this example. It is a valuable training and planning tool to improve responsiveness and organizational resiliency. However, such benefits can only be realized if exercises are done regularly and the lessons learned are applied. Similar to how regular physical exercise can benefit one’s personal well-being, table top and other business continuity exercises can also benefit an enterprise’s resiliency well-being. Therefore, exercise often.

About the Writer

Bob Farkas, PMP, AMBCI, SCRA
Manager, Project Management Office/Project Manager

Bob has been with Virtual Corporation since 2001, during which time he has led many Business Impact Analysis (BIA), Business Continuity Planning, and Risk Assessment projects across the health care, manufacturing, government, technology and other services industries. In addition, he has been instrumental in building and refining Virtual’s processes and toolkit, bringing new approaches and insights to client engagements. His career spans materials engineering, programming, telecom marketing research, IT outsourcing and business continuity. Bob holds PMP, AMBCI and SCRA certifications and has a Master’s in Chemical Engineering from the New Jersey Institute of Technology and a Bachelor’s in Metallurgical Engineering from McMaster University (Hamilton, Ontario).

In 2019, drones have definitely become an integral part of the public safety landscape and to some, drones are now considered mission critical. Flashing back to 2015, drones had shown some progress but were still mired in strict Federal Aviation Administration (FAA) regulations. The use of drones was limited to early adopters, many of which already had aviation divisions for manned aircraft.

In 2016, the most important factor to enhance adoption was on the regulatory side when the FAA published new rules for both public agency flight (Certificate of Authorization or COA) and commercial flight (Part 107). Even after these regulatory changes, drones did not really start expanding throughout public safety until the catastrophic hurricanes in 2017. In addition to the hurricanes, other large natural disasters such as floods, tornados, wildland fires, earthquakes and volcanic eruptions would unequivocally prove the value of public safety drones.

In 2018, the Bard Center for the Study of the Drone published a report that over 900 public safety agencies in the United States had received an approved COA from the FAA. This report did not capture the full picture, as there was no easy way to ascertain which public safety agencies were operating solely under Part 107 rules. For example, the Bard study identified 26 public safety agencies with COAs in Virginia; after additional research, I identified 26 additional agencies operating under Part 107. Based on Virginia’s data, it is clear that the reference to 900 COAs (while correct) substantially understates the number of public safety agencies with drone programs nationally (COA and Part 107). Additionally, Virginia grew from two public safety agencies with drone programs in 2016 to nearly 50 in 2018. A similar trend is occurring globally, and public safety is implementing drone programs at a much faster pace. This global trend is also illustrated by the many examples of growing European public safety agencies with drone programs and by the formation of the European Emergency Number Association Public Safety UAS Committee and the International Emergency Drone Organisation.

...

https://www.firehouse.com/tech-comm/drones/article/21079277/fire-technology-public-safety-drone-update

Many businesses are embarking on digital transformation initiatives that will put technology at the core of business value creation. At the same time, many of these same businesses are seeking to reduce or eliminate the cost of managing IT infrastructure. Storage vendors are addressing these seemingly incompatible goals by investing in new storage management capabilities including unified management, automation, predictive analytics, and proactive support.

iXsystems already offered API-based integration into automation frameworks and proactive support for TrueNAS. Now iXsystems has released TrueCommand to bring the benefits of unified storage management with predictive analytics to owners of its ZFS-based TrueNAS and FreeNAS arrays.

...

https://www.dcig.com/2019/06/truecommand-brings-unified-management-and-predictive-analytics-to-zfs-storage.html

(TNS) — In a state where rising sea levels are threatening oceanfront property, building $20 billion worth of seawalls would protect South Carolina’s coast from the effects of climate change during the next 20 years, a new study says.

The report by the Center for Climate Integrity says South Carolina needs 3,202 miles of seawalls — enough to cover the entire coast — to shield its beaches, marshes and tidal rivers from sea-level rise. The $20 billion cost estimate was based on a six-inch rise in sea level and a 21-inch storm surge, climate center officials said.

Nationally, the United States needs more than 50,000 miles of new seawalls, which would cost $416 billion, to protect the coastline from sea level rise, according to the climate organization, which supports aggressive efforts to halt climate change.
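
As a quick sanity check on those reported figures, the sketch below simply divides the stated totals to get an approximate cost per mile of seawall; it uses only the numbers quoted above and makes no claims beyond that arithmetic.

```python
# Back-of-the-envelope division of the figures quoted in the article:
# national and South Carolina seawall cost estimates per mile of coastline.
national_cost_usd = 416e9      # $416 billion for the US coastline
national_miles = 50_000        # "more than 50,000 miles" of new seawalls

sc_cost_usd = 20e9             # $20 billion for South Carolina
sc_miles = 3_202               # 3,202 miles of seawalls

print(f"National estimate: ~${national_cost_usd / national_miles / 1e6:.1f} million per mile")
print(f"South Carolina estimate: ~${sc_cost_usd / sc_miles / 1e6:.1f} million per mile")
```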

...

https://www.govtech.com/em/preparedness/20B-Needed-to-Shield-South-Carolina-from-Rising-Sea-Levels.html

The traditional ‘Disk-to-Disk-to-Tape’ (D2D2T) backup model is no longer adequate for always-on enterprises requiring rapid data restores and is being replaced by Flash-to-Flash-to-Cloud (F2F2C), says Peter Gadd…

In a D2D2T model, the primary disk creates a local backup on the secondary (backup) disk or on a Purpose-Built Backup Appliance (PBBA). This is typically followed by backup to tape media, which will be moved offsite or replicated to another PBBA offsite for disaster recovery purposes. This results in multiple backup silos and an expensive dedicated infrastructure for backup only. Added to this are the fragile data durability of tape and the additional hardware required for tape backup. From a functional point of view, the backup data on tape is also ‘locked up’ offsite in silos, so it cannot provide any additional value unless it is recalled.

There are two major flaws in this approach, particularly with regard to PBBAs. Firstly, they are inflexible. Typically, PBBAs are designed to copy data in one direction from source to destination and store that data through deduplication. In the meantime, the rest of the IT industry is rapidly adopting agile and elastic solutions. Secondly, PBBAs often deliver poor restoration performance, because the backup data is fragmented across both disk and tape.

These two problems lead to a slow and complex data infrastructure. Because recovering data from the backup appliance takes a long time, IT leaders may opt to deploy additional storage infrastructure to support other workloads, such as the Test/Dev environment. Snapshots of the production applications and databases are often also backed up to separate storage systems to enable faster recovery when needed. Nevertheless, disaster recovery does not guarantee that data can be recovered quickly and reliably.
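
To make the restore-performance point concrete, here is a back-of-the-envelope calculation of how long a restore takes at different sustained read rates. The dataset size and throughput figures are illustrative assumptions, not vendor benchmarks for any product named in the article.

```python
# Rough arithmetic only: estimated time to restore a data set at different
# sustained read throughputs. The numbers are assumptions for illustration.
def restore_hours(dataset_tb: float, throughput_mb_s: float) -> float:
    dataset_mb = dataset_tb * 1_000_000   # decimal TB to MB
    return dataset_mb / throughput_mb_s / 3600

dataset_tb = 50  # hypothetical backup set size
for tier, mb_s in [("tape (fragmented)", 100), ("PBBA disk", 400), ("flash", 2000)]:
    print(f"{tier:>18}: ~{restore_hours(dataset_tb, mb_s):.1f} hours to restore {dataset_tb} TB")
```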

...

https://www.continuitycentral.com/index.php/news/technology/4124-flash-to-flash-to-cloud-is-on-the-way-to-becoming-the-standard-solution-for-backup

While cybersecurity discussions have permeated board meetings, the democratization of accountability has a long way to go.
 

A spate of recent surveys offer indications that the philosophy that "cybersecurity is everyone's responsibility" is gaining steam in the C-suite at most large organizations. But digging into the numbers — and keeping in mind perennially abysmal breach statistics — it's clear that while awareness has broadened across the board room, accountability and action are still spread pretty thin.

A report released this week by Radware shows promising signs that cybersecurity is increasingly coming up in board talks and is near-universally viewed as the entire C-suite's responsibility to enable. Conducted among 260 C-suite executives worldwide, the study shows that more than 70% of organizations touch on cybersecurity as a discussion item at every board meeting. Meantime, 98% of all members across the C-suite say they have some management responsibility for cybersecurity.

...

https://www.darkreading.com/risk/cybersecurity-accountability-spread-thin-in-the-c-suite/d/d-id/1335015

While for many organizations major incidents are relatively rare events, when they occur they create personal as well as organizational resilience challenges. Dominic Irvine looks at how using the power of imagination can help…

Being resilient means being able to deal with an event that otherwise could have been traumatic. Coping is resilience over time. Resilience is also about our ability to learn from these experiences such that what was once threatening is no longer so. Just as the novice kayaker is intimidated by white water, over time they learn how to tackle rapids that once scared them.

Choosing to respond in a way that means the experience is not traumatic is a function of our imagination. We have to conceive new possibilities. The skier fearful of hitting the tree in the middle of the slope has to imagine a route to the bottom of the slope that means they miss the tree. Whereas if they continue to focus on the tree for fear of hitting it, then that’s probably what’s going to happen.

Do not underestimate the power of imagination. Pascual-Leone and colleagues (1995) showed that ‘thinking is movement confined to the brain’. In their study, one group of participants imagined doing five-finger piano exercises, a second group actually did the exercises, and a third, control group did nothing. The brain changes in those who had only imagined the piano exercises were almost identical to those in the group that physically did them. The starting point for resilience, then, is thinking about how you would like things to be in order to begin to create the possibility. That means you need to think differently about the outcome.

...

https://www.continuitycentral.com/index.php/news/resilience-news/4123-using-the-power-of-imagination-to-improve-personal-resilience-during-challenging-situations

Regions of the US have been hit hard in recent weeks with damaging tornadoes and floodwaters

 

In recent years, over $6.7 billion in damages have been incurred from tornadoes, and this year has been no different. So what can your organization do to reduce the impact of twisters as much as possible during this tornado season?

Identifying the Threats

Already in 2019 several states have experienced higher than average tornado activity and the devastation of these extreme weather events. Mississippi, for example, has already seen four times as many tornadoes in the first four months of this year compared to the last four years.

Then, just last month, tornadoes swept through Ohio, killing one person, injuring several others, and leaving behind damage and devastation.

It’s clear that tornadoes, along with other extreme weather events, are a top priority when it comes to identifying threats to your community.

...

https://www.onsolve.com/blog/tornado-emergencies-what-you-can-do-today-to-prepare/

It’s important for business continuity professionals to be knowledgeable about the ways being unprepared for disruptions can harm their organization. How else are they going to make the case to senior management that building a good continuity program is worth the effort? 

In today’s post, we’ll look at the costs, direct and indirect, of being underprepared for a disaster event.

THE CASE OF CHERNOBYL

The new HBO miniseries Chernobyl is getting a lot of attention this month. The story of the fire that broke out in one of the reactors of the Chernobyl nuclear power plant hits close to home for anyone involved in business continuity and crisis management. The incident is an extreme example of multiple failures: of oversight, emergency planning, design, and crisis management.

The costs of the Chernobyl accident are widely thought to include dozens of deaths from intense radiation exposure, thousands of cases of lifespans being shortened by radiation, and 100,000 square kilometers of land being contaminated by fallout. The disaster also cost hundreds of billions of dollars to clean up and was blamed by Mikhail Gorbachev for having brought down the Soviet Union.

The costs will probably not be equally high if there is a disaster at your organization for which people are unprepared. But the impacts could still be substantial.

 ...

https://www.mha-it.com/2019/06/19/being-unprepared/

When Hurricane Irma threatened Hillsborough County, Fla., in 2017, emergency management staff went into overdrive to prepare for the eventual Category 5 storm.

Luckily for Hillsborough, the brunt of the storm missed the county. But the feeling was that had it hit, the preparation would have been considered inadequate. “Everybody worked really hard during Irma, but when we got to the big day, they were all burned out,” County Administrator Mike Merrill told the Tampa Bay Times. “We didn’t have that strong base of planning and programs that could play out.”

An independent audit after the fact showed that, among other things, the county had not hardened its four field operations centers where disaster assessment teams strategize about some of the logistical aspects of response and recovery like damage assessment and situational awareness. It was decided a complete overhaul of the Emergency Management Division would be necessary.

...

https://www.govtech.com/em/preparedness/Hurricane-Irma-Prompts-Revamp-of-Emergency-Management-Agency.html

How a security researcher learned organizations willingly hand over sensitive data with little to no identity verification.
 

The European Union's General Data Protection Regulation (GDPR) has a provision called "Right of Access," which states individuals have a right to access their personal data. What happens when companies holding this data don't properly verify identities before handing it over?

This became the crux of a case study by James Pavur, DPhil student at Oxford University, who sought to determine how organizations handle requests for highly sensitive information under the Right of Access. To do this, he used GDPR Subject Access Requests to obtain as much data as possible about his fiancée – with her permission, of course – from more than 150 companies.

Shortly after GDPR went into effect last May, Pavur became curious about how social engineers might be able to exploit the Right of Access. "It seemed companies were in a panic over how to implement GDPR," he explains. Out of curiosity he sent a few Subject Access Requests, which individuals can make verbally or in writing to ask for access to their information under GDPR.

...

https://www.darkreading.com/endpoint/with-gdprs-right-of-access-who-really-has-access/d/d-id/1335013

Owning a small business has many rewards, like freedom, independence and the chance to financially benefit from your own hard work.  But there are also major challenges, like long hours, hungry competitors, and cash-flow problems.

One of the challenges that lands squarely on the shoulders of the small business owner is risk management. Whereas larger firms have the funds to hire specialists whose sole concern is identifying and preparing for threats to the business, Arthur the accountant and Mia the mover must take on that role themselves.

Natural disasters are a type of risk that can strike a business at any time. Luckily for Arthur and Mia, business insurance often comes with loss prevention expertise offered by many insurance carriers to their clients. An agent or broker can create a disaster and recovery plan customized for any business.

...

http://www.iii.org/insuranceindustryblog/have-a-disaster-plan-for-your-small-business/

In our ‘throw away’ society, the linear model of make, use and discard is depleting the resources of our planet – and our pockets. The solution is a circular economy, where nothing is wasted; instead, everything gets reused or transformed. While standards and initiatives abound for components of this, such as recycling, there is currently no agreed global vision on how an organization can complete the circle. A new ISO technical committee for the circular economy has just been formed to do just that.

It’s a well-known fact that the rise in consumerism and disposable products is choking our planet and exhausting it at the same time. Before we reach the day when there is more plastic in the sea than fish, something has to be done to stem the flow. According to the World Economic Forum, moving towards a circular economy is the key, and a ‘trillion-dollar opportunity, with huge potential for innovation, job creation and economic growth’.

A circular economy is one that is restorative or regenerative. Instead of buy, use, throw, the idea is that little or nothing is ‘thrown’; rather, it is reused or regenerated, thus reducing waste as well as the use of our resources.

...

https://www.iso.org/news/ref2402.html

New global research from Morrison & Foerster reveals that GCs want to be evaluated on how effectively they protect their companies from risk and reputational damage. As Paul Friedman explains, they’ll have plenty of opportunity to prove themselves. 

As globalization and technological transformation march forward and reshape the business landscape, risks lurk in more places than ever. As we’ve seen so many times, failing to address these risks can quickly result in devastating financial and reputational disasters.

One day, it’s a data breach exposing private customer data. The next, it’s a regulatory investigation into allegations of bribery halfway around the world. And on and on.

The increasing frequency of these cautionary case studies, covering such a broad range, is, of course, why risk management issues, so often viewed as a backwater in the past, are getting more attention than ever in the C-suite and boardroom. Perhaps it also helps explain the growing stature of the general counsel, who is often tasked with communicating the most pressing risks to the CEO and board. Crucially, GCs are also charged with conveying the benefits of having a plan to proactively manage risks and to mitigate them. It’s a role GCs are embracing, according to new global research by Morrison & Foerster.

...

https://www.corporatecomplianceinsights.com/with-redefined-risks-gcs-must-redefine-their-role/

Effective disaster response requires expertise across a wide range of sectors. How the various sectors interact and exchange information can be critical to a positive outcome. Often each sector, having operated within its own domain, has limited visibility into other sectors until reporting out to senior leadership. This often results in excellent work within the narrow “silo” of that domain, but leaves room for improvement in the overall picture of helping the communities impacted by a disaster.

FEMA has recently incorporated “community lifelines” into their National Response Framework. These community lifelines seek to improve the efficacy of work done within the sectors by ensuring a purposeful connection to the benefit of the impacted communities.

...

https://www.riskandresiliencehub.com/the-power-of-cross-sector-collaboration-for-economic-recovery-in-disaster-response/

(TNS) - About 80 firefighters tested their emergency preparedness skills in Montour County during a drill on Tuesday evening.

Crews responded to scenarios where tornados struck Emmanuel Nursing Home and Maria Joseph Manor within minutes of each other Tuesday evening.

While it was a drill, one Danville firefighter was actually overcome by the heat. A.J. Barnes, a member of the Continental Fire Company, was checked out at the scene by a Danville ambulance crew. She said she was a "little warm."

"This is probably the biggest drill I can remember with two incidents at the same time," said Leslie Young, chief of the Mahoning Township East End Fire Department.

...

https://www.govtech.com/em/preparedness/Firefighters-use-Disaster-Drills-to-Prepare-for-Emergencies.html

Business leaders in the financial services (FS) industry are used to tracking success with measures that reflect shareholder, investor, and market regulator values such as return on equity, net profit, assets under management, and capital adequacy ratio. This is the “money story.” However, most don’t always know how these important measures are affected by customer experience (CX) and customer engagement, or the “customer story.”

As leaders in the FS sector increasingly embrace customer-centric strategies, they will have to credibly connect the customer story to the money story. Otherwise, the legitimacy of often-heard CX slogans and promises can be easily challenged.

...

https://go.forrester.com/blogs/hardwire-cx-to-financial-performance-in-the-financial-services-sector/

By its very definition, emergency management is a field that deals constantly with challenges. Back in 2005, we co-authored an article that examined some specific “critical obstacles” facing emergency managers at the time, including: an imbalance of focus between homeland security and natural disaster management, the challenge of involving the public in preparedness planning, the lack of an effective partnership with the business community, cuts to EM funding, and questions surrounding the evolving organizational structure of the nation’s emergency management system.

Now, 14 years later, we wanted to take a second look to see if any of those obstacles have been eliminated and examine what new challenges emergency managers may be facing. Here’s what we found.

...

https://www.riskandresiliencehub.com/challenges-facing-emergency-managers-today/

In a wide-ranging article, Geary W. Sikich enters the debate about the future of the risk assessment and the business impact analysis and pulls various threads together to conclude that targeted flexibility is the basis of the art of being prepared.

Some interesting points have been written lately about how organizations apply current standards such as the business impact analysis / assessment (BIA), risk assessment methodologies, compliance and planning. Each author presents good arguments for their particular methodology.

Antifragile: Nassim Taleb created the concept of ‘antifragile’ because he did not feel that ‘resilience’ adequately described the need for organizations to be ready to absorb the impact of an event and bounce back quickly. According to McKinsey & Co., “Resilient executives will likely display a more comfortable relationship with uncertainty that allows them to spot opportunities and threats and rise to the occasion with equanimity.” 

Adaptive BC: Adaptive Business Continuity, according to its manifesto, is an approach for continuously improving an organization’s recovery capabilities, with a focus on the continued delivery of services following an unexpected unavailability of people, locations, and/or resources.

There seems to be a lot of controversy about some of the tenets of these various approaches.

...

https://www.continuitycentral.com/index.php/news/business-continuity-news/4122-adaptive-antifragile-resilient-or-just-trying-to-be-compliant

Disaster recovery is an important process within business continuity management (BCM) that focuses on developing a plan of action to recover from a potential internal or external business threat. In other words, disaster recovery is about ensuring that the business infrastructure, systems, and data can all be accessed and/or recovered in the event of a business crisis. A well-planned disaster response enables the business to continue operations in any potentially damaging situation.

Though some disasters cannot be avoided, proper planning can help to ensure that the business is prepared for a disaster and is able to recover its systems, applications, data, and other technology and infrastructure. To create a solid disaster recovery plan, the business must identify its most critical applications, systems, technologies, etc. If an item is identified as critical, that means the business would be unable to operate without it and, therefore, any critical item should be prioritized for a solution or workaround. This is necessary to ensure continuity of operations.
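
As a simple illustration of that prioritization step, the sketch below sorts a hypothetical application inventory by criticality and recovery time objective (RTO). The inventory, field names, and values are assumptions made up for the example, not part of any standard or real plan.

```python
# Illustrative sketch: rank applications for disaster recovery planning by
# criticality and recovery time objective (RTO). The inventory is hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    critical: bool      # business cannot operate without it
    rto_hours: float    # target maximum downtime

inventory = [
    Application("Payroll", critical=True, rto_hours=24),
    Application("Order processing", critical=True, rto_hours=4),
    Application("Intranet wiki", critical=False, rto_hours=72),
    Application("Customer support portal", critical=True, rto_hours=8),
]

# Critical systems first, then the tightest RTOs, so solutions or workarounds
# get planned for the items the business can least afford to lose.
recovery_order = sorted(inventory, key=lambda a: (not a.critical, a.rto_hours))
for i, app in enumerate(recovery_order, start=1):
    print(f"{i}. {app.name} (critical={app.critical}, RTO={app.rto_hours}h)")
```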

A disaster, in terms of business continuity, includes internal business issues as well as IT outages and system hacks. Because the umbrella of potential threats is so wide, it is important to have a firm understanding of all of the various ways that business could be impacted and to have a recovery plan for each disaster.

...

https://www.riskandresiliencehub.com/what-is-it-disaster-recovery/

Do we understand how to avoid the risk of our devices being compromised?

For most of us, our mobile phones are an extension of ourselves. Our daily routine, interests, personality and vital information are all stored on devices that we take with us on the go to assist with or manage our lives. This raises the question: do we understand how to avoid the risk of our devices being compromised? This article will look into the security of your phone and assess the options available to maintain that security.

Safe Communication

Communication is pivotal to crisis prevention, mitigation and recovery, and is also the primary purpose of a mobile phone. Whether you are in the crisis response team or on the receiving end of its assistance, communication channels must be kept robust and secure. Certain apps espouse the benefits of their encrypted communication channels, but not all have end-to-end encryption. Without it, the company managing the app can access conversations and potentially recordings of phone calls.

Your teams are likely to be passing on sensitive information, both in and out of a crisis. Although it is unlikely that a private company would gather such data to be used maliciously, that wouldn’t stop hackers trying to penetrate the app’s security or government organisations from demanding access. The app best known for its data protection is WhatsApp; however, others, such as Viber, Line and Telegram, also include end-to-end encryption. In fact, in May of this year WhatsApp was found to have a flaw in its software that was being abused by hackers. This has since been patched, but those using WhatsApp for personal and especially business purposes ought to be aware of the available alternatives.
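
For readers curious what end-to-end encryption means in practice, here is a minimal sketch using the open-source PyNaCl library. It is a generic public-key example, not the protocol used by any of the messaging apps named above; it only illustrates that a relay in the middle sees ciphertext and never holds a decryption key.

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Generic public-key example only; not the protocol of WhatsApp, Viber,
# Line or Telegram.
from nacl.public import PrivateKey, Box

alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Each side combines its own private key with the other side's public key.
alice_box = Box(alice_private, bob_private.public_key)
bob_box = Box(bob_private, alice_private.public_key)

ciphertext = alice_box.encrypt(b"Meet at the alternate site at 09:00.")
print("What a relay server would see:", ciphertext.hex()[:48], "...")

plaintext = bob_box.decrypt(ciphertext)
print("What only Bob can read:", plaintext.decode())
```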

...

https://www.preparedex.com/how-mitigate-risk-on-phone/

‘We are particularly honored to have ServiceNow implement MaestroRS because ServiceNow provides the platform and ecosystem in which we operate,’ Aaron Callaway, managing director, Fairchild Resiliency Systems tells CRN. ‘ServiceNow recognizes the domain expertise in their partner community.’

 

By O’Ryan Johnson

A ServiceNow partner recently had one of the business continuity and disaster recovery apps it created for the company’s app store selected to be deployed inside ServiceNow itself – a remarkable turn for a partner that did zero business with the IT service management giant just five years ago.

“We are particularly honored to have ServiceNow implement MaestroRS because ServiceNow provides the platform and ecosystem in which we operate,” Aaron Callaway, managing director, Fairchild Resiliency Systems told CRN. “ServiceNow recognizes the domain expertise in their partner community, specifically what Fairchild can provide to their customers. This validates our expertise on platform and also allows us to further innovate with the ServiceNow team.”

The Andover, Mass.-based company was an MSP focused on integrating BCDR solutions and had no ServiceNow business back in 2014; however, its customers were soon demanding expertise on the platform.

...

https://www.crn.com/news/channel-programs/servicenow-picks-partner-s-disaster-recovery-app-for-internal-use

The development follows speculation and concern among security experts that the attack group would expand its scope to the power grid.
 

The attackers behind the epic Triton/Trisis attack that in 2017 targeted and shut down a physical safety instrumentation system at a petrochemical plant in Saudi Arabia now have been discovered probing the networks of dozens of US and Asia-Pacific electric utilities.

Industrial-control system (ICS) security firm Dragos, which calls the attack group XENOTIME, says the attackers actually began scanning electric utility networks in the US and Asia-Pacific regions in late 2018 using similar tools and methods the attackers have used in targeting oil and gas companies in the Middle East and North America.

The findings follow speculation and concern among security experts that the Triton group would expand its scope into the power grid. To date, the only publicly known successful attack was that of the Saudi Arabian plant in 2017. In that attack, the Triton/Trisis malware was discovered embedded in a Schneider Electric customer's safety system controller. The attack could have been catastrophic, but an apparent misstep by the attackers inadvertently shut down the Schneider Triconex Emergency Shut Down (ESD) system.

...

https://www.darkreading.com/perimeter/triton-attackers-seen-scanning-us-power-grid-networks/d/d-id/1334968

(TNS) — The era of available electricity whenever and wherever needed is officially over in wildfire-plagued California.

Pacific Gas & Electric served stark notice of that “new normal” this past weekend when it pre-emptively shut power to tens of thousands of customers in five Northern California counties. The utility warned that it could happen again, perhaps repeatedly, this summer and fall as it seeks to avoid triggering disastrous wildfires.

The dramatic act has prompted questions and concerns: What criteria did PG&E use? Did the shutdowns prevent any fires? And what can residents do to prepare for what could be days without electricity?

The managed outages were broad but brief, affecting 22,000 customers in Napa, Yolo, Solano, Butte and Yuba counties for a handful of hours. That included Paradise and Magalia, two towns that were devastated seven months ago during the Camp Fire, a massive blaze triggered by high winds hitting PG&E transmission lines.

...

https://www.govtech.com/em/preparedness/More-Power-Blackouts-are-Coming-to-California-How-to-Prepare.html

Security should be a high priority for every organization. Unfortunately, there is a serious shortage of quality cybersecurity staffers on the market.

Who’s overseeing your organization’s security? Are they equipped to secure your data and prevent ransomware attacks, or are they more likely to be scanning for viruses with a metal detector and patching systems with tape and paper?

When (ISC)2 asked cybersecurity professionals about gaps in their workforce, 63% said there’s a short supply of cybersecurity-focused IT employees at their companies. And 60% believe their organizations are at “moderate-to-extreme” risk of attacks because of this shortage.    

Mitch Kavalsky, Director, Security Governance and Risk at Sungard Availability Services (Sungard AS), believes you can solve this problem by focusing less on hiring cybersecurity personnel with expertise in specific technologies, and more on bringing in employees with well-rounded security-focused skillsets capable of adapting as needed.

But as Bob Petersen, CTO Architect at Sungard AS, points out, a company’s overall security should not be limited to the security team; it needs to be a key component of everyone’s job. “There needs to be more of a push to drive cybersecurity fundamentals into different IT roles. The role of the security team should be to set standards, educate and monitor. They can’t do it all themselves.”

Invest in your company’s security. But invest in it the right way – with the right people. If not, you’re bound to have more problems than solutions.

(TNS) — Three of the twisters that ripped through the region on Memorial Day night left 631 homes in Montgomery County communities unlivable, according to a preliminary Ohio Emergency Management Agency assessment released today.

Tornadoes destroyed 211 homes and 43 businesses in Montgomery County, according to emergency management officials. The tornadoes caused major damage to another 420 homes and 54 businesses.

Homes either destroyed or with major damage are deemed uninhabitable, according to county emergency management officials.

In all, 2,550 homes and 173 businesses were affected, according to the initial survey.

“Our community was devastated by this storm, and our preliminary assessment shows the extent of the damage,” said Montgomery County Commission President Debbie Lieberman.

...

https://www.govtech.com/em/disaster/Montgomery-County-Ohio-Tornadoes-Left-More-Than-600-Homes-Unlivable.html

Cutting-Edge Baron Radars Reinforce Trust—If You Understand the Basic Mechanics of the Tool

 

By Dan Gallagher, Enterprise Product Manager, Baron

In late November of 2014, residents of the Buffalo, New York area busied themselves digging out of the heaviest winter snowfall event since the holiday season of 1945. Over five feet of snow blanketed some areas east of the city. This sort of extreme weather exacts serious damage: thousands of motorists were stranded, hundreds of roofs and other structures collapsed, and, tragically, thirteen people lost their lives. Perhaps this high toll would have been even greater had diligent meteorologists not caught the telltale signs of lake effect snow days before by studying lake temperature and wind trajectories at various heights. Officials warned of 3-5 inches per hour of precipitation over a day before the event began. However, this extreme snowfall did not appear on the T.V. weather map in the way such a major event usually does. The casual weather-watcher might have looked to radar imaging to make sense of the experience, but the weather instrument most conspicuous to the public was essentially blind to the phenomenon. This sort of apparent failure can make it difficult for people to trust weather technology, or create doubt in the minds of officials charged with making tough decisions for public or institutional safety—and each of those outcomes could endanger communities. While Doppler radar is a critical tool in evaluating weather conditions, it is important for institutions to understand the mechanisms it employs, its strengths and weaknesses, and the available methods for analyzing raw radar data. Why does it seem like radar got it wrong for Buffalo?

Doppler RADAR Fundamentals

RADAR, or RAdio Detection And Ranging, was a new technology when Buffalo saw its last huge snow event. Developed during World War Two, radar was first used for military applications. The tool could detect the position and movement of an object, like an enemy airplane. So, when radar operators on battleships picked up signs of rain, it made their jobs more difficult by cluttering radar data with unwanted information. Then, after the War, that clutter became the target.

The first generation of weather radars was cutting edge at the time, but rudimentary compared to today’s systems. Radar works by sending out radio waves: short bursts of energy that travel at nearly the speed of light and bounce off objects. When waves bounce back in the direction of the radar dish, the direction of the returning echo and the time it takes to return reveal where an object is relative to the system. These waves are sent out in bands. The first weather radar system, installed in Miami in 1959, could only send waves along horizontal bands, and operators had to manually adjust the elevation angle. This meant that the information radar could provide about an object was limited to a single plane. A ball and a cylinder would look the same on the radar screen because only one dimension of an object was accounted for.

The next generation of radar, embraced in the late 1980s and early ‘90s, offered more than simply location with the introduction of Doppler technology. The major advance of Doppler radar was that it could measure an object’s velocity. These radars detect what is moving toward or away from them by analyzing the shift in the frequency of returning radio waves. (Radar still cannot measure motion perpendicular to the beam.) This allows atmospheric scientists and meteorologists to identify additional characteristics of a storm. For example, they can identify rotating winds in the atmosphere, which can be a strong indication of a tornado. Generally represented in today’s systems as bright red juxtaposed with bright green, data on wind speed and direction provides greater insight into important weather events such as tornadoes.
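
As a rough, generic illustration of that principle (a textbook relation, not a description of Baron’s or the NWS’s actual signal processing), the reported radial velocity follows directly from the measured frequency shift of the returning pulse; the transmit frequency below is simply a typical S-band value and the numbers are illustrative:

    # Radial velocity from the Doppler shift of a returning pulse:
    # v_r = c * delta_f / (2 * f0); the factor of 2 accounts for the two-way path.
    C_M_PER_S = 3.0e8      # speed of light
    F0_HZ = 2.8e9          # transmit frequency, a typical S-band weather radar value

    def radial_velocity_m_per_s(delta_f_hz: float) -> float:
        """Speed toward (or away from) the radar implied by a frequency shift."""
        return C_M_PER_S * delta_f_hz / (2 * F0_HZ)

    # A 500 Hz shift works out to roughly 27 m/s (about 60 mph) of radial motion.
    print(round(radial_velocity_m_per_s(500.0), 1))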

Dual Polarization: The Modern Radar

The radars the National Weather Service (NWS) uses today form a network of 171 radars located throughout the United States. In 2007, Baron Services, along with its partner L3 Stratus, was selected to work with the NWS to modernize the entire network of radars by adding Dual Polarization technology.

Early Doppler systems had not resolved a major limitation inherited from previous generations: the single plane of weather information. Embraced in the mid-2000s, dual-polarization technology changed that. By using both horizontal and vertical pulses, dual-pol radars offer another dimension of information on objects: a true cross-section of what is occurring in the atmosphere. With data on both the horizontal and vertical attributes of an object, forecasters can clearly identify rain, hail, snow, and other airborne objects such as insects, not to mention smoke from wildfires, dust, and military chaff. Each precipitation type registers a distinct shape. For example, hail tumbles as it falls, so it appears nearly spherical to dual-pol radars. This additional dimension gives meteorologists and those responsible for monitoring the weather valuable insight, allowing them to make more informed decisions about the presence of hail, the amount of rain that may fall, and any change from liquid to frozen precipitation.
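
To make the idea concrete, here is a minimal sketch built around differential reflectivity, a standard dual-polarization quantity; the threshold and the classification labels are illustrative only and are not Baron’s or the NWS’s operational logic:

    import math

    def differential_reflectivity_db(z_horizontal: float, z_vertical: float) -> float:
        """ZDR compares power returned from horizontal vs. vertical pulses (linear units in)."""
        return 10.0 * math.log10(z_horizontal / z_vertical)

    def rough_hydrometeor_guess(zdr_db: float) -> str:
        # Falling raindrops flatten into oblate shapes, so they return more horizontal
        # than vertical power (ZDR well above zero); tumbling hail looks nearly
        # spherical on average, so its ZDR sits close to zero.
        return "likely rain" if zdr_db > 0.5 else "possibly hail or dry snow"

    print(rough_hydrometeor_guess(differential_reflectivity_db(800.0, 500.0)))  # likely rain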

Dual polarization technology is especially useful in observing winter weather. Because it can differentiate between types of precipitation, it can identify where the melting layer is with precision. This allows forecasters to better evaluate what type of precipitation to expect, because they can analyze the conditions precipitation will encounter on its way to the ground. If a snowflake passes through a warm layer, for example, it may become a raindrop, and that raindrop may turn to freezing rain when it hits a colder surface.

Radar Limitations – and How Baron Uses Technology to Fight Back

Still, radar technology has its undeniable limitations. Only so much of the vertical space is observed, since the atmosphere closest to the ground and directly above the radar is typically not scanned. This is best illustrated by the cone of silence phenomenon. Because radars do not transmit higher than a certain angle relative to the horizon, there is a cone-shaped blind spot above each radar.

Also, because radio waves are physical, clutter can make data hard to parse. Tall buildings, for example, give no useful information to meteorologists and can skew the data. However, Baron has introduced cutting-edge radar processing products that closely analyze the data returning to a radar to determine how the atmosphere has affected the path of a beam. Importantly, though, computers are not the only way, or always the best way, to account for deviations in the data. That is why Baron’s in-house weather experts monitor radar outputs daily.

Nevertheless, radar cannot achieve every goal some members of the public expect of it. Drizzle, for example, often does not show up well, because the droplets are extremely small and most drizzle occurs below the height of the radar beam. So people expecting dry conditions might be puzzled by slight sprinkles on their way to work. Another example, as mentioned earlier, is the difficulty of lake effect snow.

A lake effect snowfall occurs when cold air passes over the warmer water of a lake, a phenomenon common in the Great Lakes region. Why does radar have difficulty observing it? The culpable limitation is the angle of the radar beam. Just as radio waves are not typically sent straight up from the ground, they are also not sent out perfectly horizontally. Buildings, small topographical changes, and the like would foil low-level radar sweeps in many cases anyway, but the farther an area is from the source of the beam, the higher the blind spot. Imagine a triangle with a small angle at one corner: the farther the lines travel, the farther they part. Lake effect snow is a low-level phenomenon. So, when Buffalo shivered under feet of snow in 2014, the radar essentially missed the event because its lowest beams passed over the tops of the snow bands.
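
A small sketch makes the geometry concrete. It uses the standard 4/3-effective-Earth-radius beam-height approximation found in radar meteorology texts; the 0.5-degree angle is a typical lowest scan elevation, and the numbers are illustrative rather than specific to the Buffalo-area radar:

    import math

    EFFECTIVE_EARTH_RADIUS_KM = (4.0 / 3.0) * 6371.0   # standard refraction correction

    def beam_height_km(range_km: float, elevation_deg: float) -> float:
        """Approximate height of the beam center above the radar at a given range."""
        a = EFFECTIVE_EARTH_RADIUS_KM
        r = range_km
        theta = math.radians(elevation_deg)
        return math.sqrt(r * r + a * a + 2.0 * r * a * math.sin(theta)) - a

    # At a 0.5-degree elevation the beam center is already about 1.5 km up at 100 km
    # range and roughly 4 km up at 200 km, above much of a shallow lake-effect band.
    for rng_km in (50, 100, 150, 200):
        print(rng_km, "km ->", round(beam_height_km(rng_km, 0.5), 2), "km above the radar")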

Reading the Radar Display: What to Remember

Understanding the history of radar technology and knowing the source of the radar display can prepare anyone to better utilize radar information. Conscientious administrators and officials can recall these radar facts as part of their decision-making process when referencing radar data:

1. Radar cannot see everything.

As apparent in the Buffalo snowfall example, the physical limitations of radar mean that a radar does not ‘see’ what occurs close to the ground. Additionally, since what radar ‘sees’ in the atmosphere is affected by distance as the beam of energy radiates away from its source, the precision of the data depends on the location of the weather relative to the location of a radar. Some radar displays are compilations of data from multiple radars, pieced together to account for gaps in coverage.

2. No one image tells the whole story.

Weather evolves in stages, so it is important to watch storm motion, growth and decay. When watching weather on radar, never ignore how it trends from one scan to the next; evaluating trends allows watchers to better anticipate turbulent weather. For the best results, avoid focusing on only one storm. People tend to watch a particular storm rather than the entire picture, or miss what is downstream of the “worst” storm, and that can distract from impactful conditions and trends.

3. Raw data can be misleading.

Even when radar ‘sees’ weather accurately, it can at times mislead watchers. Virga, for example, is precipitation that evaporates before it reaches the ground due to drier air closer to the surface.  Just because the radar shows precipitation does not always mean precipitation will reach the ground. In some cases, there is no substitute for using other information to determine what is actually happening. That is why NEXRAD/Baron delivers more than just a radar image to give users a better understanding of what the weather is doing. It gives information on Severe Winds, Threats, Velocity, Hail Tracks, and other conditions.

Baron Radar Equipment Gives Decision-Makers Actionable Information

Baron Threat Net is used by stadiums, emergency management agencies, schools, racetracks, and other institutions because its radar technology makes it easier for users to identify the risk: it strips out extraneous information and processes the inevitable problems in the data to create a simplified picture of the weather. To do that, it combines computing power, scientific expertise, and other tools. Radar should not be relied upon by itself to identify every weather phenomenon. It is important that decision-makers understand the limitations of the technologies they use, not only to make the best decisions for their communities, but to reinforce community trust.

The Most Common Overt & Covert Disruptions Businesses Face Today

It’s imperative that companies understand the risks and potential impacts of business interruptions, regardless of the cause. In this report, we gathered industry data on what’s been prompting business interruptions (BI) in recent years, highlighting the extent of disruptions and their cost. Having mapped out today’s current landscape of BI, we wanted to once again demonstrate the integral role of a Business Continuity Disaster Recovery plan in a company’s ecosystem.

...

https://www.agilityrecovery.com/resources/the-current-state-of-business-interruption/

First forecast: ‘Don’t let a weak El Niño fool you’

 

By Brian Wooley & Paul Licata of Interstate Restoration

The first hurricane prediction for 2019 was less alarming than in many prior years, with only two major hurricanes forecast for the Atlantic basin. Hurricane researchers at Colorado State University announced in April that they foresee a slightly below-average Atlantic hurricane season, citing a weak El Niño and a slightly cooler tropical Atlantic Ocean as major contributors. Their second, often more accurate forecast is due June 4, and NOAA announces its first forecast of the hurricane season on May 23.

But don’t be fooled. Early predictions in 2017 also pointed to a slightly below-average Atlantic hurricane season, but in that year hurricanes Harvey, Irma and Maria slammed into the Atlantic and Gulf coasts as well as Puerto Rico and became three of the five costliest hurricanes in U.S. history.

Each storm is different and unpredictable, which means business and property owners shouldn’t become complacent; it’s extremely important to prepare in advance for major hurricanes. According to the Federal Emergency Management Agency (FEMA), 40 percent of businesses do not reopen after a disaster and another 25 percent fail within one year. Preplanning is important because it will streamline operations during and after a storm, lead to a quicker recovery and potentially lower insurance claims costs.

According to an April 10 webinar hosted by Dr. Phil Klotzbach of the Department of Atmospheric Science at CSU, researchers are predicting 13 named storms during the 2019 Atlantic hurricane season, with two becoming major hurricanes. (In comparison, in 2018, CSU predicted 14 named storms with three reaching major hurricane strength.) Historical data, combined with atmospheric research, gives the U.S. East Coast and Florida Peninsula about a 28 percent chance of being hit by a major hurricane (the average for the last century is 31 percent). The Gulf Coast, from the Florida panhandle westward to Brownsville, Texas, is forecast to have a 28 percent chance (the average for the last century is 30 percent). The Caribbean has a 39 percent chance, down from the 42 percent average for the last century.

The states with the highest probability to receive sustained hurricane-force winds include Florida (47 percent), Texas (30 percent), Louisiana (28 percent), and North Carolina (26 percent), according to Klotzbach. But hurricanes can cut a wide swath, he says, as Hurricane Michael did in October 2018 as it moved into Georgia, causing high wind damage and gusts as high as 115 mph in the southwest part of the state.

Klotzbach outlined how total dollar losses from Atlantic hurricanes are increasing each year, driven primarily by a doubling of the U.S. population since the 1950s and by larger homes being built, now averaging more than 2,600 square feet. It’s shocking to consider that if a category 4 storm struck Miami today, similar to the one that leveled the city in 1926, it’s estimated it would cost $200 billion to rebuild. That would exceed the $160 billion in damage caused by Hurricane Katrina in 2005.

Experienced recovery experts, like those at Interstate Restoration, are skilled at delivering a quick response and delegating teams to react as soon as a storm is named. Once they identify approximately where the storm will land on U.S. soil and assess its intensity, they allocate assets, resources, and equipment as needed and keep in close contact with all clients in the path of the storm.  Staging efforts to a safe area begin many days before the event.

Powerful hurricanes, such as Irma, can disrupt businesses for weeks or months, which is why pre-planning is so important. It starts with hiring a disaster response company in advance. By establishing a long term partnership before a disaster happens, business and property owners can ensure they are on the priority list for getting repairs done quickly. The restoration partner can also assist with performing a pre-loss property assessment, recovery planning and working closely with insurance.

Quick recovery is made more difficult when business and property owners neglect proper preparations. So despite predictions calling for fewer storms in 2019, it’s always better to be prepared; Mother Nature can be destructive.

 ♦

Brian Wooley is vice president of operations and Paul Licata is a national account manager at Interstate Restoration, a national disaster-response company based in Ft. Worth, Texas.

The first anniversary of GDPR is rapidly approaching on May 25. Tech companies used the past year to learn how to navigate the guidelines set in place by the law while ensuring compliance with similar laws globally. After all, companies that violate GDPR face fines of up to the higher of four percent of worldwide annual revenue or €20 million (around $22.4 million).

Although the GDPR primarily applies to countries in the European Union, the law’s reach has extended beyond the continent, affecting tech companies stateside. As long as a US-based company has a web presence in the EU, that company must also follow GDPR guidelines. In an increasingly globalized world, that leaves few companies outside the mix.

GDPR acts as a model for tech companies looking to focus on consumer security, data protection and compliance. A year into its existence, there is still work to do around comprehending and applying the GDPR’s requirements. For GDPR’s anniversary, we’ve gathered a few IT experts to shed some light on the GDPR, its global effects and how to ensure data protection.

Alan Conboy, Office of the CTO, Scale Computing:

“With the one-year anniversary of GDPR approaching, the regulation has made an impact in data protection around the world this century. One year later with the high standards from GDPR, organizations are still actively working to manage and maintain data compliance, ensuring it’s made private and protected to comply with the regulation. With the fast pace of technology innovation, one way IT professionals have been meeting compliance is by designing solutions with data security in mind. Employing IT infrastructure that is stable and secure, with data simplicity and ease-of-use is vital for maintaining GDPR compliance now and in the future,” said Alan Conboy, Office of the CTO, Scale Computing.

Samantha Humphries, senior product marketing manager, Exabeam:

“As the GDPR celebrates its first birthday, there are some parallels to be drawn between the regulation and that of a human reaching a similar milestone. It’s cut some teeth: to the tune of over €55 million – mainly at the expense of Google, who received the largest fine to date. It is still finding its feet: the European Data Protection Board are regularly posting, and requesting public feedback on, new guidance. It’s created a lot of noise: for EU data subjects, our web experience has arguably taken a turn for the worse with some sites blocking all access to EU IP addresses and many more opting to bombard us with multiple questions before we can get anywhere near their content (although at least the barrage of emails requesting us to re-subscribe has died down). And it has definitely kept its parents busy: in the first nine months, over 200,000 cases were logged with supervisory authorities, of which ~65,000 were related to data breaches.

With the GDPR still very much in its infancy, many organisations are still getting to grips with exactly how to meet its requirements. The fundamentals remain true: know what personal data you have, know why you have it, limit access to a need-to-know basis, keep it safe, only keep it as long as you need it, and be transparent about what you’re going to do with it. The devil is in the detail, so keeping a close watch on developments from the EDPB will help provide clarity as the regulation continues to mature,” said Samantha Humphries, senior product marketing manager, Exabeam.

Rod Harrison, CTO, Nexsan, a StorCentric Company:

“Over the past 12 months, GDPR has provided the perfect opportunity for organisations to reassess whether their IT infrastructure can safeguard critical data, or if it needs to be upgraded to meet the new regulations. Coupled with the increasing threat of cyber attacks, one of the main challenges businesses have to contend with is the right to be forgotten – and this is where most have been falling short.

Any EU customers can request that companies delete all of the data that is held about them, permanently. The difficulty here lies in being able to comprehensively trace all of it, and this has given the storage industry an opportunity to expand its scope of influence within an IT infrastructure. Archive storage can not only support secure data storage in accordance with GDPR, but also enable businesses to accurately identify all of the data about a customer, allowing it to be quickly removed from all records. And when, not if, your business suffers a data breach, you can rest assured that customers who have asked you to delete data won’t suddenly discover that it has been compromised,” said Rod Harrison, CTO, Nexsan, a StorCentric Company.

Alex Fielding, iCEO and Founder, Ripcord:

“If your company handles any data of European Union residents, you’re subjected to the regulations, expectations and potential consequences of GDPR. Critical elements of the regulation like right to access, right to be forgotten, data portability and privacy by design all require a company’s data management to be nimble, accessible and—most importantly—digital.

Notably, GDPR grants EU residents rights to access, which means companies must have a documented understanding of whose data is being collected and processed, where that data is being housed and for what purpose it’s being obtained. The company must also be able to provide a digital report of that data management to any EU resident who requests it within a reasonable amount of time. This is a tall order for a company as is, but compliance becomes almost unimaginable if a company’s current and archival data is not available digitally.

My advice to anyone struggling to achieve and maintain GDPR compliance is to develop and implement a full compliance program, beginning with digitizing and cataloguing your customer data. When you unlock the data stored within your paper records, you set your company up for compliance success,” said Alex Fielding, iCEO and founder of Ripcord.

Wendy Foote, Senior Contracts Manager, WhiteHat Security:

“Last year, the California Consumer Privacy Act (CCPA) was signed into law, which aims to provide consumers with specific rights over their personal data held by companies. These rights are very similar to those given to EU-based individuals by GDPR one year ago. The CCPA, set for Jan. 1, 2020, is the first of its kind in the U.S., and while good for consumers, affected companies will have to make a significant effort to implement the cybersecurity requirements. Plus, it will add yet another variance in the patchwork of divergent US data protection laws that companies already struggle to reconcile.

If GDPR can be implemented to protect all of the EU, could the CCPA be indicative of the potential for a cohesive US federal privacy law? This idea has strong bipartisan congressional support, and several large companies have come out in favor of it. There are draft bills in circulation, and with a new class of representatives recently sworn into Congress and the CCPA effectively putting a deadline on the debate, there may finally be a national resolution to the US consumer data privacy problem. However, the likelihood of it passing in 2019 is slim.

A single privacy framework must include flexibility and scalability to accommodate differences in size, complexity, and data needs of companies that will be subject to the law. It will take several months of negotiation to agree on the approach. But we are excited to see what the future brings for data privacy in our country and have GDPR to look to as a strong example,” said Wendy Foote, Senior Contracts Manager, WhiteHat Security.

Scott Parker, Director, product marketing, Sinequa:

“Even before the EU’s GDPR regulation took effect in 2018, organizations had been investing heavily in related initiatives. Since last year, the law has effectively standardized the way many organizations report on data privacy breaches. However, one area where the regulation has proven less effective is allowing regulators to levy fines against companies that have mishandled customer data.

From this perspective, organizations perceiving the regulation as an opportunity versus a cost burden have experienced the greatest gains. For those that continue to struggle with GDPR compliance, we recommend looking at technologies that offer an automated approach for processing and sorting large volumes of content and data intelligently. This alleviates the cognitive burden on knowledge workers, allowing them to focus on more productive work, and ensures that the information they are using is contextual and directly aligned with their goals and the tasks at hand,” said Scott Parker, Director, product marketing, Sinequa.

Caroline Seymour, VP, product marketing, Zerto:

“Last May, the European Union implemented GDPR, but its implications reach far beyond the borders of the EU. Companies in the US that interact with data from the EU must also meet its compliance measures, or risk global repercussions.

Despite the gravity of these regulations and their mutually agreed upon need, many companies may remain in a compliance ‘no man’s land’– not fully confident in their compliance status. And as the number of consequential data breaches continue to climb globally, it is increasingly critical that companies meet GDPR requirements. My advice to those impacted companies still operating in a gray area is to ensure that their businesses are IT resilient by building an overall compliance program.

By developing and implementing a full compliance program with IT resilience at its core, companies can leverage backup via continuous data protection, making their data easily searchable over time and ultimately, preventing lasting damage from any data breach that may occur.

With a stable, unified and flexible IT infrastructure in place, companies can protect against modern threats, ensure regulation standards are met, and help provide peace of mind to both organizational leadership and customers,” said Caroline Seymour, VP, product marketing, Zerto.

Matt VanderZwaag, Director, product development, US Signal:

“With the one-year anniversary of GDPR compliance upcoming, meeting compliance standards can still be a somewhat daunting task for many organizations. A year later, data protection is a topic that all organizations should be constantly discussing and putting into practice to ensure that GDPR compliance remains a top priority.

Moving to an infrastructure provided by a managed service provider with expertise is one solution, not only for maintaining GDPR compliance, but also implementing future data protection compliance standards that are likely to emerge. Service providers can ensure organizations are remaining compliant, in addition to offering advice and education to ensure your business has the skills to manage and maintain future regulations,” said Matt VanderZwaag, Director, product development, US Signal.

Lex Boost, CEO, Leaseweb USA:

“GDPR has played an important role in shifting attitude toward data privacy all around the world, not just in the EU. Companies doing business in GDPR-regulated areas have had to seriously re-evaluate their data center strategies throughout the past year. In addition, countries outside of the GDPR regulated areas are seriously considering better legislation for protecting data.

From a hosting perspective, managing cloud infrastructures, particularly hybrid ones, can be challenging, especially when striving to meet compliance regulations. It is important to find a team of professionals who can guide how you manage your data and still stay within the law. Establishing the best solution does not have to be a task left solely to the IT team. Hosting providers can help provide knowledge and guidance to help you manage your data in a world shaped by increasingly stringent data protection legislation,” said Lex Boost, CEO, Leaseweb USA.

Neil Barton, CTO, WhereScape:

“Despite the warnings of high potential GDPR fines for companies in violation of the law, it was never clear how serious the repercussions would be. Since the GDPR’s implementation, authorities have made an example of Internet giants. These high-profile fines are meant to serve as a warning to all of us.

Whether your organization is currently impacted by the GDPR or not, now’s the time to prepare for future legislation that will undoubtedly spread worldwide given data privacy concerns. It’s a huge task to get your data house in order, but automation can lessen the burden. Data infrastructure automation software can help companies be ready for compliance by ensuring all data is easily identifiable, explainable and ready for extraction if needed. Using automation to easily discover data areas of concern, tag them and track data lineage throughout your environment provides organizations with greater visibility and a faster ability to act. In the event of an audit or a request to remove an individual’s data, automation software can provide the ready capabilities needed,” said Neil Barton, CTO, WhereScape

By Brian Zawada
Director of Consulting Services, Avalution Consulting

Adaptive BC has done a great job of stirring up the business continuity profession with some new ideas. At Avalution, we love pushing the envelope and trying new things, so we were excited to learn more about the ideas in the Adaptive BC manifesto, as well as the accompanying book and training.

While Adaptive BC identified some real problems with the business continuity approaches taken by some organizations, their solutions aren’t for everyone (and not all organizations experience these problems). In fact, their focus is so narrow that we think it’s of little practical use for most organizations.

Business Continuity as Defined by Adaptive BC

From AdaptiveBCP.org: “Adaptive Business Continuity is an approach for continuously improving an organization’s recovery capabilities, with a focus on the continued delivery of services following an unexpected unavailability of people, locations, and/or resources” (emphasis on "recovery” added by Avalution).

As is clear from their definition and made explicit in the accompanying book (Adaptive Business Continuity: A New Approach - 2015), Adaptive BC is exclusively focused on improving recovery when faced with unavailability of people, locations, and other resources.

This approach – or focus – leaves out a long list of responsibilities that add considerable value to most business continuity management programs, such as (the quotes below are taken from Adaptive BC’s book):

...

https://www.linkedin.com/pulse/adaptive-bc-most-brian-zawada/

In June 2017 Continuity Central published the results of a survey which looked at whether attitudes to the business impact analysis and risk assessment were changing. Two years on, we are repeating the survey to determine whether there has been any development in thinking across the business continuity profession. The survey closes on May 31st, and the interim results can now be viewed.

The original survey was carried out in response to calls by Adaptive BC for the removal of the business impact analysis and risk assessment from the business continuity process.

The interim results of this year’s survey are as follows:

...

https://www.continuitycentral.com/index.php/news/business-continuity-news/4020-to-bia-or-not-to-bia-revisited-interim-survey-results

I have been the Head of Thought Leadership at the Business Continuity Institute since September 2018. In the eight months since my start date, I have been quizzed about many topics by our members – Brexit preparedness, supply chain resilience, horizon scanning and cyber risk to name but a few. However, the topic which I get pressed to answer more than any other is “What is your view on Adaptive Business Continuity?”.

My research on the subject immediately took me to the Adaptive BC website and led me to purchase Adaptive BC – A New Approach in order to learn more. Since then, interest in the so-called Adaptive BC “revolution” has gained significant traction, with numerous articles from both the founders of Adaptive BC and those who are more sceptical about the subject. Articles such as David Lindstedt’s 2018: The BC Ship Continues to Sink and Mark Armour’s Adaptive Business Continuity: Clearly Different, Arguably Better are being met with writing such as Alberto Mattia’s Adaptive BC Reinvents the Wheel and the very recent article from Jean Rowe challenging Adaptive BC’s approach (Adaptive BC: the business continuity industry’s version of The Emperor’s New Clothes?).

The so-called “revolution” has certainly stirred the BC community – but are the ructions justified?

...

https://www.thebci.org/news/adaptive-bc-a-revolution-or-a-useful-set-of-tools-and-approaches-in-the-right-circumstances.html

One of the biggest decisions companies face in conducting a Business Impact Analysis (BIA) is what use, if any, they will make of software in doing it. In today’s post we’ll look at the main software options available for doing BIAs, discuss which work best for which types of organizations, and share some tips that can help you succeed no matter what approach you take to using software.

 
USING SOFTWARE TO DO BIAs

In broad terms, there are five approaches companies can take in using software to do their BIAs.

As a reminder, a BIA is the analysis organizations conduct of their business units to determine which processes are the most critically time sensitive, so they can make well-informed decisions in doing their recovery planning.

...

https://bcmmetrics.com/choosing-software-for-bias/

Mobile apps have become the touchpoint of choice for millions of people to manage their finances, and Forrester regularly reviews those of leading banks. We just published our latest evaluations of the apps of the big five Canadian banks: BMO, CIBC, RBC, Scotiabank, and TD Canada Trust.

Overall, they’ve raised the bar, striking a good balance between delivering robust, high-value functionality and ensuring that it’s easy for customers to get that value with a strong user experience. The top two banks in our review, CIBC and RBC, both made significant improvements to their app user experience (UX) over the past year by focusing on streamlining navigation and workflows. But our analysis also revealed ways all banks can — and should — improve, such as:

Banks should give customers a better view of their financial health. Banks we reviewed don’t provide external account aggregation, and they put the burden on the user to stay on top of their monthly inflows and outflows. They don’t offer useful features such as an account history view that displays projected balances after scheduled transactions hit the account — something leading banks in other regions of the world (like Europe and the US) do offer.

...

https://go.forrester.com/blogs/the-good-the-bad-the-ugly-of-canadian-mobile-banking-experiences-in-2019/

Learn about some of the latest findings on the devastation from a hurricane, and how to prepare your business to withstand this natural catastrophe. Read this infographic by Agility Recovery.

What will happen to the plastic bag you threw away with lunch today? Will it sit in a landfill, clog a municipal sanitation system, or end up in your seafood? Concern over this question has helped spur the rise of the new and rapidly growing cultural trend of people aiming to live ‘Zero Waste’. The momentum of this movement has been fueled in part by an international recycling crisis between the United States and China, as described in this slightly grim article, Is this the End of Recycling?

Seeing images of injured marine animals or aerial footage of the Great Pacific Garbage Patch shows us just how much damage this unsolved problem can cause. We can collect data from events occurring today to predict trends in consumption and waste reduction. We can track pilot programs of composting and trash reduction and honestly evaluate the results.

All of this sounds negative, but there is a lot of good news! More and more people are prepared to take drastic action to solve the waste and recycling problems that our country will face in the future. Like the strategies used in Business Continuity and Disaster Recovery, the Zero Waste movement tries to anticipate a future problem and mitigate its effects before it happens. To do this, we must track real data as it occurs and test our solutions before they become critical to operations.

...

https://www.bcinthecloud.com/2019/05/anticipating-the-trash-crisis-of-the-future/

Adaptive Business Continuity (Adaptive BC) is an alternative approach to business continuity planning, ‘based on the belief that the practices of traditional BC planning have become increasingly ineffectual’. In this article, Jean Rowe challenges the Adaptive BC approach.

We all can appreciate the intent to innovate, but innovation, in the end, must meet the needs of the consumer. With this in mind, the Adaptive BC approach (The Adaptive BC Manifesto 2016) uses ‘innovation’ as a key message.

However, I believe that, upon reflection, the Adaptive BC approach can be viewed as the business continuity industry’s version of The Emperor’s New Clothes.

The Emperor’s New Clothes is “a short tale by Hans Christian Andersen, about two weavers who promise an emperor a new suit of clothes that they say is invisible to those who are unfit for their positions, stupid, or incompetent.” As professional practitioners, we need to dispel the myth that using the Adaptive BC approach is, metaphorically speaking, draping the Emperor (i.e. top management) in finely stitched ‘innovative’ business continuity designer clothes whose beauty only the suitably competent can see.

...

https://www.continuitycentral.com/index.php/news/business-continuity-news/3993-adaptive-bc-the-business-continuity-industry-s-version-of-the-emperor-s-new-clothes

As in many areas of business continuity and life, myths abound. Crisis management has them as well. In today’s post, we’ll look at five of the most pervasive.

Crisis management (CM) planning is an area where most companies believe (or hope) they are in great shape, even though they should have doubts about their plans and hope those plans are never put to the test.

These myths have three things in common: 1) Believing in them makes people feel like they are off the hook, 2) they aren’t true, and 3) they are an obstacle to the company’s truly becoming prepared to deal with a crisis.

Here are five of the myths we encounter most frequently when out in the field:

...

https://www.mha-it.com/2019/05/08/5-myths-of-contemporary-crisis-management/

(TNS) — People were evacuated from their homes and schools were closed or delayed Wednesday after Kansas was hit with back-to-back thunderstorms.

The Kansas Turnpike was also closed south of Wellington to the Oklahoma border Wednesday morning.

Emergency management officials began evacuating people from an area about 5 miles west of Manhattan about 5 a.m., according to a report by the Associated Press.

Evacuations started in the Wichita area early Wednesday. The Weather Channel reported on its Twitter account that evacuations were ongoing in parts of Peabody and Wellington before 6 a.m. Peabody is in Marion County, about 40 miles north of Wichita. Wellington is in Sumner County, about 35 miles south of Wichita.

...

https://www.govtech.com/em/preparedness/Kansas-Flash-Floods-Force-Evacuations-Close-Schools--Highways.html

(TNS) — As Florida enters hurricane season starting June 1, the public needs to prepare for hazardous weather and ensure disaster supply kits are complete, Sarasota County officials urged in a news release. Knowing the risk, getting prepared and staying informed are just a few steps people can take to get ready for hurricane season.

Area hurricane evacuation maps have been updated, officials noted. Residents are encouraged to check the updated maps online to know their evacuation level, previously known as a "zone."

According to Sarasota County Emergency Management officials, just because you can't see water from your home doesn't mean you're not at risk for storm surge. The updated hurricane evacuation levels and storm surge maps are available online by visiting scgov.net/beprepared.

...

https://www.govtech.com/em/preparedness/Hurricane-season-draws-near-Sarasota-Herald-Tribune-Fla.html

When a critical event happens, preparedness is key.

The ever-growing threat of risks, whether natural disasters or man-made events, has put safety and resiliency top of mind for today’s organizations. Companies of all sizes have implemented mass notification systems to send alerts to employees for situations such as severe weather updates, IT alerts or organizational announcements.

Having a notification system is important, but being prepared to use it at a moment’s notice is critical. That’s why, when a crisis strikes, it is better to have prewritten message scenarios ready to send than to fumble with the message content. However, developing prewritten messages for all kinds of events may seem daunting. Where do you start? Better yet, what do you say?

To better aid notification system admins and users, OnSolve has created the white paper, Your Alert Arsenal for Customizing and Distributing Messages during Critical Events. This critical notification resource contains over 100 pre-written example alerts for emergency and routine events. It covers a range of events in both emergency and non-emergency domains, from natural disaster alerts to customer communication notifications.

...

https://www.onsolve.com/blog/be-prepared-with-these-100-prewritten-critical-alerts/

A completely trusted stack lets the enterprise be confident that apps and data are treated and protected wherever they are.
 

With great power comes great responsibility. Just ask Spider-Man — or a 20-something system administrator running a multimillion-dollar IT environment. Enterprise IT infrastructures today are incredibly powerful tools. Highly dynamic and dangerously efficient, they enable what used to take weeks to now be accomplished — or destroyed — with a couple of mouse clicks.

In the hands of an attacker, abuse of this power can dent a company's profits, reputation, brand — even threaten its survival. But even good actors with good intentions can make mistakes, with calamitous results. Bottom line: The combination of great power with human fallibility is a recipe for disaster. So, what's an IT organization to do?

Answer: Trust the stack, not the people.

...

https://www.darkreading.com/vulnerabilities---threats/trust-the-stack-not-the-people/a/d-id/1334560

Many business continuity professionals think of the cloud as a magical realm where nothing bad can happen. The reality is that things go wrong in the cloud all the time and as a result we must be sure to perform our due diligence in setting up our cloud-based IT/Disaster Recovery solutions.

In today’s post we’ll look at some of the common misconceptions people have about the cloud.

We’ll also talk about some things you can do to make sure this excellent “new” invention called the cloud doesn’t disappoint you when you need it most.

...

https://bcmmetrics.com/cloud-based-dr-misconceptions/

With hurricane season right around the corner, it’s never too early for businesses to start preparing for potential impact. The first line of defense in protecting your people and assets is understanding how a hurricane’s category level can help your business prepare for the worst.

But first, a quick history lesson:

In the 1970s, Miami engineer Herbert Saffir teamed up with Robert Simpson, the director of the National Hurricane Center. Their mission: develop a simple scale to measure hurricane intensity and the potential damage storms of varying strength could cause to residential and business structures.

The result is the Saffir-Simpson Hurricane Wind Scale, which assigns a category level to storms based on their sustained wind speeds. The scale ranks every hurricane from 1-5, with 5 being the most intense—a storm of this magnitude will leave behind catastrophic damage in its wake.
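
As a quick reference, here is a minimal sketch of that mapping using the published Saffir-Simpson sustained-wind thresholds in miles per hour (the function name is just illustrative):

    def saffir_simpson_category(sustained_wind_mph: float) -> int:
        """Return the Saffir-Simpson category; 0 means below hurricane strength."""
        if sustained_wind_mph >= 157:
            return 5          # catastrophic damage
        if sustained_wind_mph >= 130:
            return 4
        if sustained_wind_mph >= 111:
            return 3          # 'major' hurricanes start here
        if sustained_wind_mph >= 96:
            return 2
        if sustained_wind_mph >= 74:
            return 1
        return 0              # tropical storm or weaker

    print(saffir_simpson_category(115))   # -> 3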

...

https://www.alertmedia.com/blog/hurricane-categories/

Passwords are simply too vulnerable. On the dark web the underground market for passwords and other identity details is thriving. Every month at least one major hack or data leak takes place in which millions of records, including passwords, are exposed or stolen.

If a hacker gets a password and email address, they simply apply the information to online platforms such as Amazon, eBay, Facebook and others until they get a hit. It’s common practice, known as credential stuffing. By some estimates, many people will have upwards of 200 online accounts within a few years. How do you remember passwords for so many accounts? The savvy use password managers; however, many still use the same password across all their accounts despite warnings.

Every year BullGuard notes that surveys of the most common passwords reveal that '123456', 'password', '123456789' and 'qwerty' still make the top 10. Cyber criminals love it. They have great success using simple keyboard patterns to break into accounts online because they know so many people are using these easy-to-remember combinations.
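
A tiny, hedged sketch of the kind of deny-list check many sign-up forms apply shows why those choices are so easy to break; the list here is only the handful of examples named above, not a real blocklist:

    # Illustrative deny-list built from the common passwords cited above; real
    # services check candidates against far larger breached-password datasets.
    COMMON_PASSWORDS = {"123456", "password", "123456789", "qwerty"}

    def is_too_common(candidate: str) -> bool:
        """Reject any password that appears on the (illustrative) common list."""
        return candidate.strip().lower() in COMMON_PASSWORDS

    print(is_too_common("qwerty"))        # True: trivially guessable
    print(is_too_common("T4k0-Llama-9"))  # False: at least not on this short list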

Because of their inherent vulnerability, should we be seeing the slow decline of the password? If so, what will replace it, and what will we be using five years from now? This article provides some insight by looking at how today’s developments are evolving from their password roots and how they might shape the future.

...

https://www.continuitycentral.com/index.php/news/technology/3972-it-s-2024-will-passwords-have-become-obsolete

In the last few years, biometric technologies from fingerprint to facial recognition are increasingly being leveraged by consumers for a wide range of use cases, ranging from payments to checking luggage at an airport or boarding a plane. While these technologies often simplify the user authentication experience, they also introduce new privacy challenges around the collection and storage of biometric data.

In the US, state regulators have reacted to these growing concerns around biometric data by enacting or proposing legislation. Illinois was the first state to enact such a law in 2008: the Biometric Information Privacy Act (BIPA). BIPA regulates how private organizations can collect, use, and store biometric data, and it enables individuals to sue organizations for damages based on misuse of their biometric data.

...

https://go.forrester.com/blogs/the-growing-legal-and-regulatory-implications-of-collecting-biometric-data/

With GDPR and the California Consumer Privacy Act dominating the data privacy conversation, Baker Tilly’s David Ross discusses the myriad benefits of maintaining compliance.

Recently, we saw Google fined $57 million by France for violations of the sweeping General Data Protection Regulation (GDPR) legislation passed by the European Union. Fined for not properly disclosing or alerting consumers to how their data would be used, Google ran afoul of the new data privacy laws enacted in May 2018.

Consumers and corporations alike face unfortunate repercussions when cybersecurity precautions aren’t taken seriously. Gloomy statistics and stories of well-known corporations losing customer and vendor personal information to large-scale data breaches fill the news on a near-daily basis. Data breaches are occurring at an unprecedented rate, and their cost continues to rise each year. A study by the Ponemon Institute reports the average cost of a data breach is up 6.4 percent since 2017, to a whopping $3.86 million.

While there is significant press surrounding the fines organizations must pay for breaches and violations, the other less apparent and often difficult-to-quantify costs can be much greater, farther reaching and longer lasting. These may include reputational damage, loss of stock value, loss of current and future customers, class action lawsuits and remediation expenses from breaches such as notification costs or credit report monitoring for affected customers.

...

https://www.corporatecomplianceinsights.com/investment-in-cybersecurity-is-key-to-minimizing-risk-and-gaining-a-competitive-edge/

Exploits give attackers a way to create havoc in business-critical SAP ERP, CRM, SCM, and other environments, Onapsis says.

Exploits targeting a couple of long-known misconfiguration issues in SAP environments have become publicly available, putting close to 1 million systems running the company's software at risk of major compromise.

Risks include attackers being able to view, modify, or permanently delete business-critical data or taking SAP systems offline, according to application security vendor Onapsis.

The exploits, which Onapsis has collectively labeled 10KBLAZE, were publicly released April 23. They affect a wide range of SAP products, including SAP Business Suite, SAP S/4 HANA, SAP ERP, SAP CRM, and SAP Process Integration/Exchange Infrastructure.

...

https://www.darkreading.com/attacks-breaches/new-exploits-for-old-configuration-issues-heighten-risk-for-sap-customers/d/d-id/1334602

And now that I have your attention… there really is a link between the two incongruous topics in the headline. Archive360’s Bill Tolson explains.

Perhaps you remember sitting through a class in high school billed as “sex education,” yet finding it dealt so indirectly with the topic that it was difficult, if not impossible, to discern the pertinent details that would help you understand what you really needed to know in this area. When faced with a real-life situation, many of us thus stumbled in blindly.

If you know anything about the General Data Protection Regulation (GDPR), then you’ll see the close analogy here. While the regulation has been in effect for almost a year now, many companies are still failing to grasp and act on the necessary details to stay compliant — the equivalent of closing their eyes and hoping for the best.

...

https://www.corporatecomplianceinsights.com/what-does-gdpr-compliance-have-in-common-with-sex-education/

New study shows SMBs face greater security exposure, but large companies still support vulnerable systems as well.

Organizations with high-value external hosts are three times more likely to have severe security exposure to vulnerabilities such as outdated Windows software on their off-premise systems versus their on-premise ones.

While external hosts at SMBs face greater exposure than larger companies, as company revenues grow so do the number of hosts and security issues affecting them, according to a new study published yesterday by the Cyentia Institute and researched by RiskRecon. The study analyzed data from 18,000 organizations and more than 5 million hosts located in more than 200 countries.

The study, Internet Risk Surface Report: Exposure in a Hyper-Connected World, identified more than 32 million security vulnerabilities, such as old Magecart ecommerce software and systems running outdated versions of OpenSSL that are vulnerable to exploits such as DROWN and Shellshock.

Wade Baker, founder of the Cyentia Institute, says the results have to be carefully analyzed. For example, 4.6% of companies with fewer than 10 employees had high or critical exposure to security vulnerabilities, versus 1.8% of companies with more than 100,000 employees. So while the 1.8% number sounds good percentage-wise, that's still many more hosts exposed.

...

https://www.darkreading.com/risk/study-exposes-breadth-of-cyber-risk/d/d-id/1334580

(TNS) — Unlicensed handgun owners would be allowed to carry their weapons — openly or concealed — in public for up to a week in any area where a local, state or federal disaster is declared, under a bill that has been overwhelmingly approved by the Texas House, 102 to 29.

House Bill 1177 by Rep. Dade Phelan, R-Beaumont, now awaits its first hearing in the Texas Senate. Phelan said he wrote the bill so gun owners don’t have to leave their firearms behind when evacuating their homes. Existing laws allow gun owners to store them in their vehicles, with some conditions.

“I don’t want someone to feel like they have to leave their firearms back in an unsecured home for a week or longer, and we all know how looting occurs in storms,” Phelan said. “Entire neighborhoods are empty and these people can just go shopping, and one of the things they’re looking for is firearms.”

Opponents say Phelan’s bill could make a bad situation worse by adding firearms to an already volatile situation.

...

https://www.govtech.com/em/preparedness/Bill-Would-Let-Unlicensed-Texans-Carry-Handguns-in-Public-After-Disasters.html

More than six months have passed since I wrote Forrester’s predictions 2019 report for distributed ledger technology (DLT, AKA blockchain). In the blockchain world, that’s ages ago.

As I keep being asked how those predictions are shaping up, and having just attended two excellent events in New York, now’s a good time to take stock. So how did we do?

Terminology shift from blockchain to DLT: I was mostly wrong but also a little bit right. What we’re seeing today is neatly reflected in the titles of the two conferences I referred to above: the EY Global Blockchain Summit and IMN’s Synchronize 2019: DLT And Crypto For Financial Institutions. In other words, in the financial services sector, the distributed ledger/DLT terminology has become predominant; there are even firms where the term “blockchain” is banned from the vocabulary altogether. Outside of this industry, though, it’s a different picture: Say “DLT” or “distributed ledger,” and you get blank stares; say “blockchain,” and eyes light up. For the same reason, many startups continue leading with “blockchain” in their marketing, even if their software lacks some of the characteristics typically associated with that descriptor. One said to me: “Blockchain is a recognized category; DLT isn’t.”

...

https://go.forrester.com/blogs/predictions-2019-blockchain/

According to hurricane research scientists at Colorado State University, early predictions for the 2019 hurricane season show a slightly below average activity level.

While this could be good news, we can’t forget the destruction caused by hurricanes in the past several years. In 2018, fifteen named storms developed, with Hurricanes Michael and Florence making landfall and causing crises for both Florida and North Carolina. The 2017 season cost more than $282 billion and caused up to 4,770 fatalities. Whether we see two named storms or ten, preparation is your greatest ally against potential devastation. Start by using these automated message templates for your organization’s mass notification system.

Using Hurricane Notification Message Templates

When using message templates, there are a few basic guidelines to follow. Start by keeping the message length to a minimum. This ensures recipients can get the most information in the least amount of time. In addition, SMS messages cannot exceed 918 characters; longer messages are broken up into multiple messages that may create confusion.
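
A short sketch of how a notification tool might enforce those limits by splitting an alert into SMS-sized pieces; the 153-character segment size is the common figure for concatenated GSM-7 messages (six segments of 153 characters give the 918-character cap mentioned above), though actual gateway behavior varies:

    SEGMENT_CHARS = 153        # usable characters per concatenated GSM-7 segment
    MAX_TOTAL_CHARS = 918      # overall cap cited above (6 segments x 153)

    def split_into_segments(message: str) -> list:
        """Trim an alert to the overall cap and split it into SMS segments."""
        text = message[:MAX_TOTAL_CHARS]
        return [text[i:i + SEGMENT_CHARS] for i in range(0, len(text), SEGMENT_CHARS)]

    alert = "Hurricane warning for the coastal area. Move to your designated shelter now. " * 4
    segments = split_into_segments(alert)
    print(len(segments), "segment(s); first 40 characters:", segments[0][:40])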

By creating message templates prior to severe weather, you can generate detailed and informative alerts for every step in your emergency plan. Then in the wake of a hurricane, these messages are ready to be sent to the right audiences. Recipients receive only those messages that apply to them, which helps to eliminate confusion during a stressful time.

...

https://www.onsolve.com/blog/mass-notification-alerting-templates-for-the-2019-hurricane-season/

Business Continuity Awareness Week 2019 is May 13‑17

This global event is a time to consider business continuity and the value an effective continuity management program can have for your organization.

An emergency notification system is a crucial tool in any business continuity plan. Every day, events like the following happen with no warning:

  • Hurricanes, tornadoes, and other natural disasters
  • Active shooter
  • Urban wildfire
  • Power outages
  • Cybercrime
  • Disease outbreaks
  • Workplace violence

One of the most frequent consequences of these events is limited or impaired communication, making it difficult to relay critical messages regarding safety and disaster response. Emergency notification systems have proven to be a vital tool for today’s organizations.

...

https://www.onsolve.com/blog/top-3-emergency-notification-tips-for-business-continuity-awareness-week-2019/

As corporate boards gather for annual shareholder meetings, the issues in the spotlight are defined by forces driving both business growth and risk. BDO’s Amy Rojik offers suggestions for how boards can be prepared to communicate with stakeholders this year.

For corporate boards, spring marks the arrival of annual shareholder meeting season. Every year, shareholders gather for board meetings armed with questions and concerns that, if not sufficiently addressed, may hamper their confidence in a business’s ability to manage risk and sustain long-term value creation.

In 2019, the list of issues on boards’ radars is defined by the forces driving both business growth and risk, in equal measure. This year’s key areas of shareholder concern can be grouped into four categories: digital transformation and data protection; people and culture; market movement; and regulation and reporting. Here are some suggestions for how boards can address them.

Digital Transformation & Data Protection

With organizations facing increasing pressure to streamline and optimize every aspect of their business, digital transformation is at the crux of business innovation. As a result, it is nearly impossible to walk into a boardroom without hearing the phrase mentioned. And for good reason — having a digital transformation strategy is no longer optional; it is necessary for survival in today’s digital economy. Corporate boards should expect shareholders to question how much is being spent on digital transformation, who is leading the charge on strategy, what the return on investment is and how the organization compares to its peers. In communicating a digital strategy to stakeholders, linking it to clear key performance indicators (KPIs) and business objectives is critical.

...

https://www.corporatecomplianceinsights.com/culture-digital-transformation-and-other-concerns-boards-must-be-ready-to-discuss/

When we walk into our homes, we can ask our voice assistants to turn the lights on, use our faces to unlock doors and monitor our home cameras on our phones. When we travel, the planes we take now include connected blockchain-based parts that regularly alert crews for vital maintenance. Brought on by the Fourth Industrial Revolution (4IR), smart, connected technologies are helping make life easier, faster and more convenient, because together they significantly extend the intelligence and reach of any one digital technology on its own. However, blended artificial intelligence (AI), internet of things (IoT), blockchain and other 4IR technologies also bring countless new entry points for risk.

Consider, for instance, how many companies are using AI for analytics that improve with use. But data errors, or bias in software or models, can misinform decisions and cause unforeseen accidents. AI-related risks have ranged from public pushback on the use of AI-based surveillance cameras to software glitches that led to self-driving car crashes. Add to this list evolving regulations in areas like data privacy, and missed risks can be costly. A 2018 report by the Ponemon Institute estimates noncompliance costs at 2.7 times the cost of maintaining or meeting compliance requirements — up 45 percent since 2011.

While companies race to digitally transform themselves and realize the full potential of 4IR technologies, we should pause to consider how companies can best navigate the immense risk that these blended technologies bring.

...

https://www.corporatecomplianceinsights.com/why-digitally-fit-internal-audit-compliance-functions-are-vital-for-the-fourth-industrial-revolution/

Protiviti’s Jim DeLoach discusses one of the more pervasive issues falling within senior management’s and the board’s purview. Performance relates to virtually everything important: execution of the strategy, the customer experience, investor expectations, executive compensation and even senior management and the board itself. Accurately measuring it is critical.

Performance management is so integral to the functioning of executive management and to the oversight of the board of directors that it’s easy to forget that it, too, is a process. Like all processes, it can be effective or ineffective in delivering the desired value. Given the complexity of the global marketplace, the accelerating pace of disruptive change and ever-increasing stakeholder expectations, how should executive management direct and the board oversee the performance management process so that it is effective in driving execution of the strategy and incenting the desired behaviors across the organization?

As the ultimate champion for effective corporate governance, the board engages management with emphasis on four broad themes: strategy, policy, execution and transparency. Effective performance management touches each of these themes by focusing outwardly as well as inwardly and looking to the future as well as to the present and past. The message is that, in today’s environment, the focus on performance must be anticipatory and proactive as well as reactive and interactive in focusing company resources on the pursuit of its performance goals.

Many organizations use some variation of a balanced scorecard that integrates financial and non-financial measures to communicate what’s important, focus and align processes and people with strategic objectives and monitor progress in executing the strategy. With that as a context, we are observing in the marketplace six important areas of emphasis for measuring performance:

...

https://www.corporatecomplianceinsights.com/how-to-fine-tune-performance-management-and-transform-your-organization/

Financial services firms saw upticks in credential leaks and credit card compromise as cybercriminals go where the money is.

More than one-quarter of all malware attacks target the financial services sector, which has seen dramatic spikes in credential theft, compromised credit cards, and malicious mobile apps as cybercriminals seek new ways to generate illicit profits.

It's hardly surprising to learn attackers want money; what researchers highlight in IntSights' "Banking & Financial Services Cyber Threat Landscape Report" is what they look for and how they obtain it. The first quarter of 2019 saw a 212% year-over-year spike in compromised credit cards, 129% surge in credential leaks, and 102% growth in malicious financial mobile apps.

Banks and other financial services organizations were targeted in 25.7% of all malware attacks last year – more than any of the other 27 industries tracked. Researchers point to two key events that largely shaped the modern financial services threat landscape: the shutdown of cybercriminal forum Altenen and "Collections #1-5," a major global data leak earlier this year.

...

https://www.darkreading.com/vulnerabilities---threats/credit-card-compromise-up-212--as-hackers-eye-financial-sector/d/d-id/1334562

Cavirin‘s Anupam Sahai discusses the factors that determine whether the CCPA impacts an organization, what the requirements are if so and what action you can take to prepare for it.

Just when you thought you had a handle on GDPR, businesses have new legislation to worry about: the California Consumer Privacy Act (CCPA). The CCPA stipulates that California residents should have greater access to and control over personal information held by businesses. In particular, the law seems targeted at online social media firms (e.g., Facebook) that have been reckless with their users’ personal information over the past few years. With the number of data breaches to date, are we really that surprised that something like this is coming into effect?

CCPA will become effective on January 1, 2020, but will not be enforced until six months afterward. The new law enshrines a few fundamental rights for consumers: to access the information that companies hold on them and to control what has been collected, stored and shared within the previous 12 months. So, come July 1, 2020, if a company has collected personal information from January 1, 2019 onward, the consumer has the right to find out exactly what data the business has collected, to opt out of the company selling their data and to ask for their data to be deleted – or, as the GDPR puts it, the right to be forgotten.

...

https://www.corporatecomplianceinsights.com/5-steps-to-prepare-for-californias-consumer-privacy-act/

Strategic Overview

Disasters disrupt preexisting networks of demand and supply. Quickly reestablishing flows of water, food, pharmaceuticals, medical goods, fuel, and other crucial commodities is almost always in the immediate interest of survivors and longer-term recovery.

When there has been catastrophic damage to critical infrastructure, such as the electrical grid and telecommunications systems, there will be an urgent need to resume—and possibly redirect— preexisting flows of life-preserving resources. In the case of densely populated places, when survivors number in the hundreds of thousands, only preexisting sources of supply have enough volume and potential flow to fulfill demand.

During the disasters in Japan (2011) and Hurricane Maria in Puerto Rico (2017), sources of supply remained sufficient to fulfill survivor needs. But the loss of critical infrastructure, the surge in demand, and limited distribution capabilities (e.g., trucks, truckers, loading locations, and more) seriously constrained existing distribution capacity. If emergency managers can develop an understanding of fundamental network behaviors, they can help avoid unintentionally suppressing supply chain resilience, with the ultimate goal of ensuring that they “do no harm” to surviving capacity.

Delayed and uneven delivery can prompt consumer uncertainty that increases demand and further challenges delivery capabilities. On the worst days, involving large populations of survivors, emergency management can actively facilitate the maximum possible flow of preexisting sources of supply: public water systems; commercial water/beverage bottlers; food, pharmaceutical, and medical goods distributors; fuel providers; and others. To do this effectively requires a level of network understanding and a set of relationships that must be cultivated prior to the extreme event. Ideally, key private and public stakeholders will conceive, test, and refine strategic concepts and operational preparedness through recurring workshops and tabletop exercises. When possible, mitigation measures will be pre-loaded. In this way, private-public and private-private relationships are reinforced through practical problem solving.

Contemporary supply chains share important functional characteristics, but risk and resilience are generally anchored in local-to-regional conditions. What best advances supply chain resilience in Miami will probably share strategic similarities with Seattle, but will be highly differentiated in terms of operations and who is involved.

In recent years the Department of Homeland Security (DHS) and the Federal Emergency Management Agency (FEMA) have engaged with state, local, tribal and territorial partners, private sector, civic sector, and the academic community in a series of innovative interactions to enhance supply chain resilience. This guide reflects the issues explored and the lessons (still being) learned from this process. The guide is designed to help emergency managers at every level think through the challenge and opportunity presented by supply chain resilience. Specific suggestions are made related to research, outreach, and action.

...

https://www.fema.gov/media-library-data/1555328671083-d9422177bd55d9c6fafc327a6b239290/SupplyChainResilienceGuide-April2019.pdf

Tuesday, 30 April 2019 14:48

FEMA Supply Chain Resilience Guide

vpnMentor’s research team discovered a hack affecting 80 million American households.

Known hacktivists Noam Rotem and Ran Locar discovered an unprotected database impacting up to 65% of US households.

Hosted on a Microsoft cloud server, the 24 GB database includes the number of people living in each household, along with their full names, marital status, income bracket, age, and more.

...

https://www.vpnmentor.com/blog/report-millions-homes-exposed/

(TNS) - Twenty-three men and women from Cambria, Somerset and Bedford counties graduated on Friday after a week of training by the Laurel Highlands Region Police Crisis Intervention Team.

The program, held at Pennsylvania Highlands Community College in Richland Township, included classes on suicide prevention, mental illness, strategies to de-escalate situations, dealing with juveniles and specialty courts.

“It’s critical that we give them the training,” said Kevin Gaudlip, Richland Township police detective and event coordinator. “Many of these situations are suicidal people that we encounter. In this course, officers are given the skills to effectively communicate with these people to prevent suicide.”

Police officers, 911 dispatchers, corrections officers, EMS personnel, probation officers, crisis intervention teams and others participated.

...

https://www.govtech.com/em/safety/Emergency-Personnel-Taught-De-Escalation-Techniques-at-Week-Long-Program.html

A firm’s people play essential roles in all stages of IT transformation. For companies at the beginner level of maturity, employees must come together to connect the organization. Once the organization is united, it must adopt customer-centric principles to become adaptable and reach intermediate maturity. To reach an advanced maturity level, the organization must again rely on its people to transition from being adaptable to adaptive. At each of these maturity levels, a company’s talent, culture, and structure look slightly different. The key differences in these three areas between beginner, intermediate, and advanced firms undergoing IT transformations are as follows:

...

https://go.forrester.com/blogs/the-biggest-barrier-to-it-transformation-is-people/

In previous articles, we discussed how communicable diseases and pandemics are (or are not) addressed in personal and commercial insurance policies. Today, we’ll talk about pandemic catastrophe bonds.

The Ebola outbreak between 2014 and 2016 ultimately resulted in more than 28,000 cases and 11,000 deaths, most of them concentrated in the West African countries of Guinea, Liberia, and Sierra Leone.

The outbreak inspired the World Bank to develop a so-called “pandemic catastrophe bond,” an instrument designed to quickly provide financial support in the event of an outbreak. The World Bank reportedly estimated that if the West African countries affected by the Ebola outbreak had had quicker access to financial support, then only 10 percent of the total deaths would have occurred.

But wait, what are “catastrophe bonds” and what’s so special about a pandemic bond?

...

http://www.iii.org/insuranceindustryblog/pandemic-catastrophe-bonds/

Monday, 29 April 2019 18:31

ALL ABOUT PANDEMIC CATASTROPHE BONDS

With a year of Europe's General Data Protection Regulation under our belt, what have we learned?

There is no denying the impact of the European Union General Data Protection Regulation (GDPR), which went into effect on May 25, 2018. We were all witness — or victim — to the flurry of updated privacy policy emails and cookie consent banners that descended upon us. It was such a zeitgeist moment that "we've updated our privacy policy" became a punchline.

Pragmatically, the GDPR will serve as a catalyst for a new wave of privacy regulations worldwide — as we have already seen with the California Consumer Privacy Act (CCPA) and an approaching wave of state-level regulation from Washington, Hawaii, Massachusetts, New Mexico, Rhode Island, and Maryland.

GDPR has been a boon for technology vendors and legal counsel: A PricewaterhouseCoopers survey indicates that GDPR budgets have topped $10 million for 40% of respondents. A majority of businesses are realizing that there are benefits to remediation beyond compliance, according to a survey by Deloitte. CSOs are happy to use privacy regulations as evidence in support of stronger data protection, CIOs can rethink the way they architect their data, and CMOs can build stronger bonds of trust with their customers.

...

https://www.darkreading.com/risk/a-rear-view-look-at-gdpr-compliance-has-no-brakes/a/d-id/1334491

Security is a top concern at all levels of the organization, but especially at the board level and C-suite. SoftwareONE’s Mike Fitzgerald champions a “security-first” mentality and discusses the implications of failing to meet industry standards and regulations.

Instances of lost intellectual property (IP) due to data breaches are gaining attention in the mainstream press and in board rooms across the globe. C-suite executives are taking note of these events; security and compliance are no longer just IT issues. They are very real and very urgent business issues. Breaches and noncompliance have a major impact on business. After all, in the U.S. alone, the average data breach could cost a company upward of $7.9 million.

Compliance concerns are receiving attention from existing c-suite executives and have caused enough of a stir to lead to the creation of new roles, such as the Chief Compliance Officer (CCO), who is tasked with understanding and managing the plethora of compliance requirements that organizations must address. The CCO and the Chief Information Security Officer (CISO) need to be aware of compliance requirements on the global level (think General Data Protection Regulation (GDPR)) and on the local level (Health Insurance Portability and Accountability Act (HIPAA) and Sarbanes-Oxley (SOX)), since most organizations store at least some of their data in the cloud. The fine for a breach or lapse in compliance with an industry standard or regulation like GDPR can equal as much as 4 percent of a company’s revenue; that is potentially enough to put a company out of business. This new compliance-driven market makes it imperative to have a security-first mentality when it comes to IT decisions and a thorough understanding of the greater business implications resulting from a lack of proper security practices.

...

https://www.corporatecomplianceinsights.com/why-security-and-compliance-have-a-permanent-seat-at-the-boardroom-table/

More and more businesses are deploying applications, operations, and infrastructure to cloud environments – but many don’t take the necessary steps to properly operate and secure them.

"It's not impossible to securely operate in a single-cloud or multicloud environment," says Robert LaMagna-Reiter, CISO at First National Technology Solutions (FNTS). But cloud deployment should be strategized with input from business and security executives. After all, the decision to operate in the cloud is largely driven by business trends and expectations.

One of these drivers is digital transformation. "There is a driving force, regardless of industry, to act faster, respond to customers quicker, improve internal and external user experience, and differentiate yourself from the competition," LaMagna-Reiter says. Flexibility is the biggest factor, he adds, as employees and consumers want access to robust solutions that can be updated quickly.

...

https://www.darkreading.com/cloud/how-to-build-a-cloud-security-model/d/d-id/1334552

Monday, 29 April 2019 18:26

How to Build a Cloud Security Model

When Newman, Calif., police officer Ronil Singh was murdered in December 2018, a Blue Alert was issued to notify the public of the danger posed by a killer on the loose and to help apprehend the suspect.

The Blue Alert, a brief message sent via FEMA’s Integrated Public Alert and Warning System (IPAWS), was issued by the California Highway Patrol (CHP) in the Fresno and Merced areas where the suspect was believed to be on the run. The link embedded in the alert, which led to a flyer with additional information on the suspect, was clicked on more than a million cellphones within 30 minutes.

Developed by OnSolve, Blue Alert is a new addition to IPAWS that gives law enforcement officials the ability to alert the public when an officer has been injured or killed. It is administered in California by the CHP, which acts on information provided by the local agency seeking to send an alert.

...

https://www.govtech.com/em/safety/Blue-Alerts-Notify-the-Public-When-Police-Are-Injured-or-Killed.html

There’s a pervasive myth out there that the marijuana industry is an unregulated Wild West populated by desperadoes and mountebanks out to score a quick buck.

But even a passing familiarity with how the industry operates in states with legal recreational and medical marijuana should be enough to dispel that myth. Marijuana operations are subject to extremely strict licensing requirements and regulatory oversight. Every player in the marijuana supply chain is tightly controlled – from cultivators to retail stores to, yes, the buyers themselves.

In fact, a recent analysis from workers compensation insurer Pinnacol Assurance suggests that the industry’s strict regulatory oversight may also be the reason why it’s a safe industry to work in.

...

http://www.iii.org/insuranceindustryblog/safe-work-marijuana/

What does the future hold? This year on 28 April, the World Day for Safety and Health at Work draws attention to the future of work and reminds us of the importance of ISO solutions in combating work-related injuries, diseases and fatalities worldwide.

Health and safety at work likely isn’t an issue that’s top of mind on a daily basis. Yet, for millions of workers across the globe, their jobs can put them in some extremely high-risk environments where valuing safety can mean the difference between life and death.

Organized by the International Labour Organization (ILO), the World Day for Safety and Health at Work aims to raise awareness of the importance of occupational health and safety and build a culture of prevention in the workplace. This year’s theme looks to the future for continuing these efforts through major changes such as technology, demographics, sustainable development, and changes in work organization.

...

https://www.iso.org/news/ref2385.html

In the wake of a reported ransomware attack on global manufacturing firm Aebi Schmidt, Peter Groucutt outlines the steps companies should take to prepare for such incidents. A clear cyber incident response plan and frequent communication are critical.

The details of the attack on Aebi Schmidt remain light at this stage, but early reports suggest it was severe, with systems for manufacturing operations left inaccessible. The manufacturing sector has recently seen a number of targeted ransomware attacks using a new breed of ransomware known as LockerGoga. Norwegian aluminium producer Norsk Hydro and French engineering firm Altran have been hit in Europe. In the US, chemicals company Hexion was also attacked. The reasoning behind these targets is clear – paralysing the IT systems of these businesses has an immediate effect on their production output. That means significant losses, potentially millions of dollars per day. Unlike mass ransomware attacks that might net the attacker a few hundred pounds per victim, the ransoms demanded in these targeted attacks are correspondingly higher.

If you are hit by a ransomware attack, you have two options. You can either recover the information from a previous backup or pay the ransom. However, even if you pay the ransom, there is no guarantee you will actually get your data back, so the only way to be fully protected is to have historic backup copies of your data. When recovering from ransomware, your aims are to minimise both data loss and IT downtime. Defensive and preventative strategies are essential but outright prevention of ransomware is impossible. It is therefore vital to plan for how the organization will act when compromised to reduce the impact of attacks. Having an effective cyber incident response plan in place is critical to your recovery.

...

https://www.continuitycentral.com/index.php/news/technology/3947-lessons-from-a-ransomware-attack

Friday, 26 April 2019 14:59

Lessons from a Ransomware Attack

Sea level rise, and its perils, is often associated with the East Coast. But California communities along the coast that don’t prepare for what’s ahead could be inviting disasters of the magnitude not yet seen in the state.

A report by the United States Geological Survey Climate Impacts and Coastal Processes Team suggests that future sea level rise, in combination with major storms like the ones the state is experiencing now, could cause more damage than wildfires and earthquakes.

This is the first study that looks not just at sea level rise in California, but at sea level rise combined with a major storm, to assess the total risk to coastal communities.

...

https://www.govtech.com/em/preparedness/Sea-Level-Rise-Plus-Modern-Storms-Equals-Devastation-in-California-.html

Spam has given way to spear phishing, cryptojacking remains popular, and credential spraying is on the rise.

The time it takes to detect the average cyberattack has shortened, but cyberattackers are now using more subtle techniques to avoid better defenses, a new study of real incident response engagements shows.

Victim organizations detected attacks in 14 days on average last year, down from 26 days in 2017. Yet attackers seem to be adapting to evade the greater vigilance: Spam, while up slightly in 2018, continues to account for a far smaller share of e-mail volume than in any other year in the past decade, and techniques such as hard-to-detect cryptojacking and low-volume credential spraying are becoming more popular, according to Trustwave’s newly published Global Security Report.

Other stealth tactics—such as code obfuscation and "living off the land," where attackers use system tools for their malicious aims—are also coming into greater use, showing that attackers are changing their strategies to avoid detection, says Karl Sigler, threat intelligence manager at Trustwave's SpiderLabs. 

...

https://www.darkreading.com/vulnerabilities---threats/cyberattackers-focus-on-more-subtle-techniques/d/d-id/1334545

(TNS) — Teenagers and adults lined up in the Jerome High School gym, ready to receive medication, while police stood guard outside.

The exercise was part of a four-day simulation, organized by the South Central Public Health District and Jerome County Office of Emergency Management, to prepare for a potential anthrax or other bioterrorism attack. The exercise coincided with similar exercises in Idaho’s six other public health districts.

The South Central Public Health District holds large-scale simulations every few years, district director Melody Bowyer said, and smaller exercises annually.

“One of our very important missions for public health is to protect and prepare the community for a real health threat,” such as a disease outbreak, natural disaster or bioterrorism attack, Bowyer said.

...

https://www.govtech.com/em/preparedness/There-Wasnt-an-Anthrax-Attack-in-Jerome-but-These-Folks-Pretended-Like-There-Was.html

For 74 minutes, traffic destined for Google and Cloudflare services was routed through Russia and into the largest system of censorship in the world, China's Great Firewall.

On November 12, 2018, a small ISP in Nigeria made a mistake while updating its network infrastructure that highlights a critical flaw in the fabric of the Internet. The mistake effectively brought down Google — one of the largest tech companies in the world — for 74 minutes.

To understand what happened, we need to cover the basics of how Internet routing works. When I type, for example, HypotheticalDomain.com into my browser and hit enter, my computer creates a web request and sends it to HypotheticalDomain.com’s servers. These servers likely reside in a different state or country than I do. Therefore, my Internet service provider (ISP) must determine how to route my web browser’s request to the server across the Internet. To maintain their routing tables, ISPs and Internet backbone companies use a protocol called Border Gateway Protocol (BGP).
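
To see why one misconfigured announcement can pull traffic off course, it helps to remember that routers prefer the most specific matching prefix. The sketch below is a deliberately simplified, hypothetical illustration of longest-prefix matching; real BGP selection also weighs AS-path length, local preference and policy, and the prefixes and labels here are made up for the example.

    # Simplified longest-prefix-match route selection: a leaked, more-specific
    # announcement wins over the legitimate, broader one.
    import ipaddress

    routes = {
        "8.8.8.0/24": "legitimate route announced by the content provider",
        "8.8.8.0/25": "more-specific route leaked by a misconfigured ISP",
    }

    def best_route(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matches = [
            (net, label)
            for net, label in routes.items()
            if addr in ipaddress.ip_network(net)
        ]
        # Routers prefer the longest (most specific) matching prefix.
        net, label = max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)
        return f"{destination} -> {net} ({label})"

    print(best_route("8.8.8.8"))  # the leaked /25 wins and captures the traffic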

...

https://www.darkreading.com/cloud/how-a-nigerian-isp-accidentally-hijacked-the-internet/a/d-id/1334482

(TNS) — Rather than let FEMA trailers sit empty at the Bay County Fairgrounds group site and the staging area in Marianna, Panama City, Fla., is asking to be given the opportunity to put people in them.

City Manager Mark McQueen said the city is negotiating with the Federal Emergency Management Agency to try to acquire the surplus trailers. As of last week, there were more than 50 empty trailers at the fairgrounds campsite, according to FEMA reports, in addition to the ones that were staged in Marianna and never rolled out for use.

"Those have gone unclaimed because FEMA has been unable to make contact with those survivors," McQueen said at the recent City Commission meeting. "Knowing that there are some already established in our group sites and that there are another 70 up in Marianna that are not yet placed, we are striving to get those donated to the city."

The hope, according to McQueen, is to get 100 trailers that city officials can offer as interim housing to people who have fallen through the cracks.

...

https://www.govtech.com/em/disaster/Panama-City-to-FEMA-Give-us-Empty-Trailers-to-House-Hurricane-Michael-Victims.html

Most companies that underinvest in business continuity can give you a reason why they do so, but those reasons are almost always ill-founded. In today’s post, we’ll look at the most common rationales organizations give for skimping on BC—and show you the reality behind those same topics.

In working as a business continuity consultant, I’ve had the opportunity to become familiar with companies that come from across the spectrum in terms of the level of their BC planning. This includes many organizations with stellar programs and also many that do not fully implement their BC plan or have no BC program at all.

The companies that skimp on BC are almost always very articulate in explaining why they think it’s not worthwhile for them to develop a robust BCM program. However, the reasons they give are almost always based on false assumptions and incomplete information.

...

https://www.mha-it.com/2019/04/24/bad-reasons-for-skimping-on-bcm/

(TNS) - In Congress, battles are raging over disaster relief spending. Who should get the help? Puerto Rico, still seeking emergency reconstruction money in the wake of 2017 Hurricane Maria (and yes, Puerto Rico is part of the United States and just as deserving of help as, say, North Carolina)? How about Hawaii, where volcanic eruptions have seen molten lava destroy homes, roads and other infrastructure? Nebraska and Iowa, which were inundated by some of the worst flooding in their history? California, trying to rebuild from the most widespread and deadly wildfires the state has ever seen? Or the Florida Panhandle and parts of Georgia, where homes and farms were wiped out by the violent Hurricane Michael last year?

All those disasters and more — they are a signature national wound of the 21st century, a growing roster of attacks by natural forces that are unprecedented in their power and frequency. The object of current congressional fisticuffs is a $13 billion disaster aid package that tries to address many of those violent and devastating acts of nature. And it's not nearly enough to repair what's been broken, let alone do what's needed to prepare for a future that's likely filled with more such fire, wind and water.

Government at every level should have seen it coming, two or three decades ago. That's when we first became aware that climate change had begun, with warmer air and water temperatures and changing weather patterns that were producing more and bigger storms, and droughts where the land was once verdant. As The Washington Post reported this week, taxpayer spending on federal disaster relief funds is almost 10 times greater than it was three decades ago — and that's adjusted for inflation.

...

https://www.govtech.com/em/preparedness/EDITORIAL-To-Address-the-New-Normal-Disaster-Relief-Needs-to-Start-With-Prevention.html

Today's application programming interfaces are no longer simple or front-facing, creating new risks for both security and DevOps.

All APIs are different inside, even if they're using similar frameworks and architectures, such as REST. Under whatever architectural "roof," the data protocols are always different — even when the structure is the same.

You've likely heard of specific protocol formats, such as REST, JSON, XML, and gRPC. These are actually data formatting and transportation languages that act as APIs' spokes. Inside those formats there is a lot of variation. These formatting languages are less "language" and more like airplanes that carry ticketed passengers who move through airports to get where they need to be. The languages the passengers speak and their individual cultural details differ widely.
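
As a small, hypothetical illustration of that point, the same record can ride in different "airplanes": only the wire format changes, not the structure. The field names below are invented for the example.

    # The same record expressed in two wire formats; the structure is
    # identical, only the formatting language differs.
    import json
    import xml.etree.ElementTree as ET

    record = {"account_id": "12345", "action": "transfer", "amount": "250.00"}

    as_json = json.dumps(record)

    root = ET.Element("request")
    for key, value in record.items():
        ET.SubElement(root, key).text = value
    as_xml = ET.tostring(root, encoding="unicode")

    print(as_json)  # {"account_id": "12345", "action": "transfer", "amount": "250.00"}
    print(as_xml)   # <request><account_id>12345</account_id>...</request>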

From a security perspective, the protocol itself does nothing. To be effective, security needs to translate the language and intention of each person coming through, not just let the passengers navigate freely.

...

https://www.darkreading.com/5-security-challenges-to-api-protection/a/d-id/1334475

Thursday, 25 April 2019 14:11

5 Security Challenges to API Protection

(TNS) — With a lot of hard work by the Shoalwater Bay Tribe, a vertical tsunami evacuation tower near Tokeland should be ready for “the big one” by the end of October 2020.

Shoalwater Bay emergency management director Lee Shipman said none of it would have been possible without a core group of driven individuals, particularly previous emergency managers like Dave Nelson and George Crawford.

“We wouldn’t have gotten the (grant) application done without their expertise,” said Shipman. “We are all passionate; we’re kind of like a tsunami evacuation tower gang.”

Nelson and Crawford were instrumental in forming the tribe’s emergency management plans. There are two tsunami warning sirens on the reservation; the one on the north end is named George, after Crawford; the one at the south end — off Blackberry Lane, next to where the evacuation tower will stand — is named Dave, after Nelson.

...

https://www.govtech.com/em/preparedness/Design-Work-Underway-for-Shoalwater-Tsunami-Evacuation-Tower.html

The Committee on Foreign Investment in the United States (CFIUS) recently forced the Chinese owner of dating app Grindr to divest its ownership interest, citing national security concerns. Fox Rothschild’s Nevena Simidjiyska explains what the decision means for companies that handle personal data going forward.

A new law has expanded the oversight powers of the Committee on Foreign Investment in the United States (CFIUS), and businesses are quickly learning that the interagency committee won’t hesitate to block a deal or force the divestment of a prior acquisition, particularly one involving sensitive customer data or “critical technologies” in industries ranging from semiconductors to social media.

Within the past two years, CFIUS blocked the acquisition of U.S. money transfer company MoneyGram International Inc., as well as a deal in which Chinese investors aimed to acquire mobile marketing firm AppLovin.

...

https://www.corporatecomplianceinsights.com/cfius-flexes-new-muscles-where-customer-data-and-critical-technology-are-involved/

Rising to the cyber challenge

Our third Hiscox Cyber Readiness Report provides you with an up-to-the-minute picture of the cyber readiness of organisations, as well as a blueprint for best practice in the fight to counter the ever-evolving cyber threat.

Barely a week goes by without news of a major cyber incident being reported, and the stakes have never been higher. Data theft has become commonplace; the scale of ransom demands has risen steadily; and cumulatively the environment in which businesses must operate is increasingly hostile. The cyber threat has become the unavoidable cost of doing business today.

This is our third Hiscox Cyber Readiness Report and, for the first time, a significant majority of firms surveyed said they experienced one or more cyber attacks in the last 12 months. Both the cost and frequency of attacks have increased markedly compared with a year ago, and where hackers formerly focused mainly on larger companies, small- and medium-sized firms are now equally vulnerable.

...

https://www.hiscox.co.uk/cyberreadiness#

Wednesday, 24 April 2019 14:20

The Hiscox Cyber Readiness Report 2019

(TNS) - A warming Earth may add slightly more muscle to heat-hungry hurricanes, but also slash the number that form by 25 percent by the end of the century as drier air dominates the middle levels of the atmosphere.

According to a presentation given this week at the National Hurricane Conference in New Orleans, climate change is expected to intensify storms by about 3 percent, or a few miles per hour, by the year 2100.

Global warming likely added 1 percent to Hurricane Michael's Cat 5 power, or 1 to 2 mph, said Chris Landsea, tropical analysis forecast branch chief at the National Hurricane Center.

"That is a fairly small increase and most of the computer guidance by global warming models say maybe we could see 3 percent stronger by the end of the century," said Landsea, who spoke during a session on hurricane history. "That's really not very much."

...

https://www.govtech.com/em/preparedness/Climate-Change-Slightly-Stronger-but-Fewer-Hurricanes-One-Expert-Says.html

Stopping malware the first time is an ideal that has remained tantalizingly out of reach. But automation, artificial intelligence, and deep learning are poised to change that.

The collective efforts of hackers have fundamentally changed the cyber defense game. Today, adversarial automation is being used to create and launch new attacks at such a rate and volume that every strain of malware must now be considered a zero day and every attack considered an advanced persistent threat.

That's not hyperbole. According to research by AV-Test, more than 121.6 million new malware samples were discovered in 2017. That is more than 333,000 new samples each day, more than 230 new samples each minute, nearly four new malware samples every second.
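
The per-day, per-minute and per-second figures follow directly from the annual total, as this quick check shows:

    # Quick arithmetic check of the AV-Test figures quoted above.
    samples_2017 = 121_600_000                # new malware samples in 2017
    per_day = samples_2017 / 365              # ~333,151 new samples per day
    per_minute = per_day / (24 * 60)          # ~231 per minute
    per_second = per_minute / 60              # ~3.9 per second
    print(round(per_day), round(per_minute), round(per_second, 1))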

...

https://www.darkreading.com/vulnerabilities---threats/when-every-attack-is-a-zero-day/a/d-id/1334468

Wednesday, 24 April 2019 14:16

When Every Attack Is a Zero Day

The NYDFS cybersecurity requirements, first enacted in 2017, are now fully in place and helping to address glaring shortcomings in data security. OneSpan’s Michael Magrath provides a quick recap of the fourth and final phase of mandates to help organizations ensure they’re up to speed.

New York’s reputation as the “financial capital of the world” is legendary. The New York State Department of Financial Services (NYDFS) regulates approximately 1,500 financial institutions and banks, as well as over 1,400 insurance companies, and the overwhelming majority of financial institutions conducting business in the U.S. fall under NYDFS regulation – including international organizations operating in New York.

The NYDFS Cybersecurity Requirements for Financial Services Companies (23 NYCRR 500), first enacted in 2017, are now fully in place, and all banks and financial services companies operating in the state must secure their assets and customer accounts against cyberattacks in compliance with its mandates.

The regulation requires financial institutions to implement specific policies and procedures to better protect user data and to implement effective third-party risk management programs with specific requirements – both digital and physical.

...

https://www.corporatecomplianceinsights.com/nydfs-cybersecurity-requirements-are-now-fully-mandatory-are-you-ready/

Even more are knowingly connecting to unsecure networks and sharing confidential information through collaboration platforms, according to Symphony Communication Services.

An alarming percentage of workers are consciously avoiding IT guidelines for security, according to a new report from Symphony Communication Services.

The report, released this morning, is based on a survey of 1,569 respondents from the US and UK who use collaboration tools at work. It found that 24% of those surveyed are aware of IT security guidelines yet are not following them. Another 27% knowingly connect to an unsecure network. And 25% share confidential information through collaboration platforms, including Skype, Slack, and Microsoft Teams.  

While the numbers may at first appear alarming, there's another way to look at them, says Frank Dickson, a research vice president at IDC who covers security.

"What I see is a large percentage of workers who view security as an impediment," Dickson says. "When security gets in the way of workers getting their jobs done, people will go around security. Companies need to provide better tools so people can be more effective."

...

https://www.darkreading.com/threat-intelligence/1-in-4-workers-are-aware-of-security-guidelines---but-ignore-them/d/d-id/1334492

(TNS) - After the apocalyptic Camp Fire reduced most of Paradise to ashes last November, a clear pattern emerged.

Fifty-one percent of the 350 houses built after 2008 escaped damage, according to an analysis by McClatchy. Yet only 18 percent of the 12,100 houses built before 2008 did.

What made the difference? Building codes.

The homes with the highest survival rate appear to have benefited from “a landmark 2008 building code designed for California’s fire-prone regions – requiring fire-resistant roofs, siding and other safeguards,” according to a story by The Sacramento Bee’s Dale Kasler and Phillip Reese.

When it comes to defending California’s homes against the threat of wildfires, regulation is protection. The fire-safe building code, known as the 7A code, worked as intended. Homes constructed in compliance with the 2008 standards were built to survive.

...

https://www.govtech.com/em/preparedness/EDITORIAL-Build-to-Survive-Homes-in-Californias-Burn-Zones-Must-Adopt-Fire-Safe-Code.html

Grounded Boeing Angers A Whole Value Chain

Boeing’s having a tough run. The self-proclaimed world’s largest aerospace company is under “intense scrutiny” after two crashes involving its 737 MAX jets, with governments around the world grounding planes, massively affecting travel and airline operations. Boeing finds itself in the center of a terrible storm of angry consumers, buyers, and regulators.

Not The First Time . . . But The Worst Time

This isn’t the first time Boeing planes have crashed — but PR-wise, it’s the worst. What’s different serves as caution for all leaders, regardless of industry. The zeitgeist has changed: No company is immune to the demands of empowered customers, not even B2B companies like Boeing. In Boeing’s case, the empowered customers are not just airlines but also the flying public. B2B companies never really had to worry about public scrutiny with its volatile fury. In an industry’s value chain, they played safely in the background, behind their B2C buyer. In this case, airline manufacturers historically didn’t interact with passengers post-crash but instead worked with regulators. A US presidential tweet hurled the issue into the public realm, a virtual court whose norms disregard protocol.

...

https://go.forrester.com/blogs/boeing-case-reveals-b2bs-not-immune-to-volatile-brand-crises/

Don't let social media become the go-to platform for cybercriminals looking to steal sensitive corporate information or cause huge reputational damage.

Social media has become the No. 1 marketing tool for businesses, with 82% of organizations now using social media as a key communication and promotional tactic. It has become the window to a business, enabling companies to build a following, engage with clients and consumers, and share news and updates in a cost-effective way.

While social media can be a great tool, there are also a number of associated security threats. Just by having a presence on the platforms, organizations of all sizes put themselves at risk.

...

https://www.darkreading.com/vulnerabilities---threats/4-tips-to-protect-your-business-against-social-media-mistakes/a/d-id/1334417

Sometimes problems result when the IT department does its own recovery planning, and then BC comes along and conducts an analysis that shows IT’s plans to be inadequate. In today’s post, we’ll look at why this gap in recovery strategies is dangerous and what you, as a business continuity professional, can do to narrow it.

 
THE IT DEPARTMENT GOES IT ALONE

The lack of alignment on key recovery objectives between IT and the business continuity team can lead to catastrophic impacts to customer service, operations, shareholder value, and other areas in the event of a critical disruption.

However, this is an area where the IT department deserves a good amount of sympathy and understanding from the BC team.

The problem starts when the IT team sets about working on its own to develop recovery plans for the organization’s systems and applications. Often they are told to do this by management, and they typically do the work in a silo, with minimal cooperation from other departments.

In devising their recovery plans, the IT department is usually flying blind because they have a limited view of the larger needs of the organization.  

...

https://bcmmetrics.com/eliminating-recovery-strategy-gaps/

Sixty-four percent of global security decision makers recognize that improving their threat intelligence capabilities is a high or critical priority. Nevertheless, companies across many industries fail to develop a strategy for achieving this. Among the many reasons why organizations struggle to develop a threat intelligence capability, two stand out: developing a mature threat intelligence program is expensive, and it’s difficult to determine viable protections without a cohesive picture of what actually works. Fortunately, the digital risk protection (DRP) market provides a solution to the threat intelligence problem for enterprises and small-to-medium businesses (SMBs) alike.

Digital risk protection services substantially improve an organization’s ability to mitigate risk by providing the organization with actionable and relevant intelligence. By simulating an outsider’s perspective of an organization’s digital presence, security professionals working for the organization can better determine which of their assets are most at risk and develop solutions to better protect those assets. Additionally, DRP services can be utilized to protect a company’s reputation by scouring the web for instances of data fraud, breaches, phishing attempts, and more.

...

https://go.forrester.com/blogs/understanding-the-evolving-drp-market/

Monday, 22 April 2019 16:40

Understanding The Evolving DRP Market

Compliance teams have yet to adopt a proper management system to substantiate the critical role they play. SEI’s Kevin Byrne discusses how, rather than continuing to raise compliance issues as they occur, CCOs should graduate to consistent, ongoing management-level reporting.

Compliance programs today are at an interesting crossroads. In 2004, the SEC adopted rule 206(4)-7, requiring all registered investment companies and investment advisers to adopt and implement written policies and procedures reasonably designed to prevent violation of the federal securities laws. Firms learned they had to review those policies and procedures annually for their adequacy and the effectiveness of their implementation and to designate a chief compliance officer (CCO) to administer the policies and procedures. Thus, the compliance program as we know it today was born.

Firms hired CCOs and tasked them with creating programs to protect investors and comply with federal securities laws. CCOs built their programs with the tools of the time – principally Microsoft Office – and while there is more experience to draw from, they largely continue to manage their programs the same way today. Policies and procedures are maintained in MS Word. Risk assessments are maintained in Excel. Communications are stored in Outlook. Documentation is maintained on shared drives or in SharePoint.

...

https://www.corporatecomplianceinsights.com/compliance-program-evolution-the-need-for-a-compliance-management-system/

Recent studies show that before automation can reduce the burden on understaffed cybersecurity teams, those teams need to bring in enough automation skills to run the tools.

Cybersecurity organizations face a chicken-and-egg conundrum when it comes to automation and the security skills gap. Automated systems stand to reduce many of the burdens weighing on understaffed security teams that struggle to recruit enough skilled workers. But at the same time, security teams find that a lack of automation expertise keeps them from getting the most out of cybersecurity automation. 

A new study out this week from Ponemon Institute on behalf of DomainTools shows that most organizations today are placing bets on security automation. Approximately 79% of respondents either use automation currently or plan to do so in the near future.

For many, automation investments are justified to management as a way to beat back the effects of the cybersecurity skills gap that some industry pundits say has created a 3 million person shortfall in the industry. Close to half of the respondents to Ponemon's study report that the inability to properly staff skilled security personnel has increased their organizations' investments in cybersecurity automation. 

...

https://www.darkreading.com/threat-intelligence/the-cybersecurity-automation-paradox/d/d-id/1334470

Monday, 22 April 2019 16:38

The Cybersecurity Automation Paradox

Archived data great for training and planning

By GLEN DENNY, Baron Services, Inc.

Historical weather conditions can be used for a variety of purposes, including simulation exercises for staff training; proactive emergency weather planning; and proving (or disproving) hazardous conditions for insurance claims. Baron Historical Weather Data, an optional collection of archived weather data for Baron Threat Net, lets users extract and view weather data from up to 8 years of archived radar, hail and tornado detection, and flooding data. Depending upon the user’s needs, the weather data can be configured with access to a window of either 30 days or 365 days of historical access. Other available options for historical data have disadvantages, including difficulty in collecting the data, inability to display data or point query a static image, and issues with using the data to make a meteorological analysis.

Using data for simulation exercises for staff training

Historical weather data is a great tool to use for conducting realistic severe weather simulations during drills and training exercises. For example, using historical lightning information may assist in training school personnel on what conditions look like when it is time to enact their lightning safety plan.

Reenactments of severe weather and lightning events are beneficial for school staff to understand how and when actions should have been taken and what to do the next time a similar weather event happens. It takes time to move people to safety at sporting events and stadiums. Examining historical events helps decision makers formulate better plans for safer execution in live weather events.

Post-event analysis for training and better decision making is key to keeping people safe. A stadium filled with fans for a major sporting event with severe weather and lightning can be extremely deadly. Running a post-event exercise with school staff can be extremely beneficial to building plans that keep everyone safe for future events.

Historical data key to proactive emergency planning

School personnel can use historical data as part of proactive advance planning that allows them to take precautionary measures. For example, if an event in the past year caused an issue, like flooding of an athletic field or facility, officials can look back to that day in the archive using the Baron Threat Net total accumulation product, and then compare the forecast precipitation accumulation from the Baron weather model to see if the upcoming weather is of comparable scale to the event that caused the issue. Similarly, users could look at historical road condition data and compare it to the road conditions forecast.

The data can also be used for making the difficult call to cancel school. The forecast road weather lets officials look at problem areas 24 hours before the weather happens. The historical road weather helps school and transportation officials examine problem areas after the event and make contingency plans based on forecast and actual conditions.

Insurance claims process improved with use of historical data

Should a weather-related accident occur, viewing the historical conditions can be useful in supporting accurate claim validation for insurance and funding purposes. In addition, if an insurance claim needs to be made for damage to school property, school personnel can use the lightning, hail path, damaging wind path, or critical weather indicators to see precisely where and when the damage was likely to have occurred.

Similarly, if a claim is made against a school system due to a person falling on an icy sidewalk on school property, temperature from the Baron current conditions product and road condition data may be of assistance in verifying the claim.

Underneath the hood

The optional Baron Historical Weather Data addition to the standard Baron Threat Net subscription includes a wide variety of data products, including high-resolution radar, standard radar, infrared satellite, damaging wind, road conditions, and hail path, as well as 24-hour rainfall accumulation, current weather, and current threats.

With up to 8 years of data on offer, users can select a specific product and review up to 72 hours of data at one time, or review a specific time on a specific date. Information is available for any given area in the U.S., and historical products can be layered (for example, hail swath and radar data). Packages are available in 7-day, 30-day, or 1-year increments.

Other available options for historical weather data are lacking

There are several ways school and campus safety officials can gain access to historical data, but many have disadvantages, including difficulty in collecting the data, inability to display the data, and the inability to point query a static image. Also, officials may not have the knowledge needed to use the data for making a meteorological analysis. In some cases, including road conditions, there is no available archived data source.

For instance, radar data may be obtained from the National Centers for Environmental Information (NCEI), but the process is not straightforward, making it time consuming. Users may have radar data, but lack the knowledge base to be able to interpret it. By contrast, with Baron Threat Net Historical Data, radar imagery can be displayed, with critical weather indicators overlaid, taking the guesswork out of the equation.

There is no straightforward path to obtaining historical weather conditions for specific school districts. The local office of the National Weather Service may be of some help but their sources are limited. By contrast, Baron historical data brings together many sources of weather and lightning data for post-event analysis and validation. Baron Threat Net is the only online tool in the public safety space with a collection of live observations, forecast tools, and historical data access.

By TREVOR BIDLE, information security and compliance officer, US Signal

World Backup Day purposely falls the day before April Fool’s Day. The founders of the initiative, which takes place March 31, want to impress upon the public that the loss of data resulting from a failure to back up is no joke.

It’s surprising to find that nearly 30 percent of us have never backed up our data. Even more shocking are studies stating that only four in ten companies have a fully documented disaster recovery (DR) plan in place. Of those companies that have a plan, only 40 percent test it at least once a year.

Data has become an integral component of our personal and professional lives, from mission-critical business information to personal photos and videos. DR plans don’t have to be overly complicated. They just need to exist and be regularly tested to ensure they work as planned.

Ahead of World Backup Day, here are some of the key components to consider in a DR plan.

The Basics of Backup

A backup creates data copies at regular intervals that are saved to a hard drive, tape, disk or virtual tape library and stored offsite. If you lose your original data, you can retrieve copies of it. This is particularly useful if your data became corrupted at some point. You simply “roll back” to a copy of the data before it was corrupted.
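
A minimal sketch of that “roll back” step, assuming nightly copies and hypothetical timestamps and labels: pick the newest backup taken before the corruption was introduced.

    # "Roll back": choose the newest backup copy taken before the point at
    # which the data is known to have been corrupted.
    from datetime import datetime

    backups = {
        datetime(2019, 4, 28, 2, 0): "nightly-0428",
        datetime(2019, 4, 29, 2, 0): "nightly-0429",
        datetime(2019, 4, 30, 2, 0): "nightly-0430",
    }
    corruption_detected = datetime(2019, 4, 29, 14, 30)

    restore_point = max(t for t in backups if t < corruption_detected)
    print("Restore from:", backups[restore_point])  # nightly-0429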

Other than storage media costs, backup is relatively inexpensive. It may take time for your IT staff to retrieve and recover the data, however, so backup is usually reserved for data you can do without for 24 hours or more.  It doesn’t do much for ensuring continued operations.

Application performance can also be affected each time a backup is done. However, backup is a cost-effective means of meeting certain compliance requirements and of enabling granular recovery, such as recovering a single user’s emails from three years ago. It serves as a “safety net” for your data and has a distinct place in your DR plan.

You can opt for a third-party vendor to handle your backups. For maximum efficiency and security, companies that offer cloud-based backups may be preferable. Some allow you to back up data from any physical or virtual infrastructure, or Windows workstation, to their cloud service. You can then access your data any time, from anywhere. Some also offer backup as a managed service, handling everything from remediation of backup failures to system/file restores to source.

Stay Up-To-Date with Data Replication

Like backup, data replication copies and moves data to another location. The difference is that replication copies data in real- or near-real time, so you have a more up-to-date copy.

Replication is usually performed outside your operating system, in the cloud. Because a copy of all your mission-critical data is there, you can “fail over” and migrate production seamlessly. There’s no need to wait for backup tapes to be pulled.

Replication costs more than backup, so it’s often reserved for mission-critical applications that must be up and running for operations to continue during any business interruption. That makes it a key component of a DR plan.

Keep in mind that replication copies every change, even if the change resulted from an error or a virus. To access data before a change, the replication process must be combined with continuous data protection or another type of technology to create recovery points to roll back to if required. That’s one of the benefits of a Disaster Recovery as a Service (DRaaS) solution.
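
The distinction matters because a replica faithfully repeats mistakes. The toy sketch below, a rough illustration rather than any vendor’s implementation, shows how journaling every change with a timestamp (the essence of continuous data protection) makes it possible to rebuild state as of a recovery point just before an erroneous change.

```python
from datetime import datetime
from typing import Any

class ChangeJournal:
    """Toy continuous-data-protection journal: every change is kept with its
    timestamp, so state can be reconstructed as of any recovery point."""

    def __init__(self) -> None:
        self._entries: list[tuple[datetime, str, Any]] = []

    def record(self, when: datetime, key: str, value: Any) -> None:
        # A plain replica would simply apply this change at the secondary site,
        # including changes caused by errors or malware.
        self._entries.append((when, key, value))

    def state_as_of(self, recovery_point: datetime) -> dict:
        """Replay only the changes made at or before the chosen recovery point."""
        state: dict = {}
        for when, key, value in sorted(self._entries, key=lambda e: e[0]):
            if when <= recovery_point:
                state[key] = value
        return state
```

Rolling back is then just a matter of calling state_as_of with a timestamp immediately before the bad change.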

Planning for Disasters

DRaaS solutions offer benefits that make them an attractive option for integrating into a DR plan. By employing true continuous data protection, a DRaaS solution can offer a recovery point objective (RPO) of a few seconds. Applications can be recovered instantly and automatically — in some cases with a service level agreement (SLA)-based recovery time objective (RTO) of minutes.
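
As a quick way to reason about whether a recovery met those objectives, the following sketch compares an achieved RPO and RTO against SLA targets; the 30-second and 15-minute thresholds are placeholders, not figures from any specific DRaaS contract.

```python
from datetime import datetime, timedelta

def check_recovery_sla(last_recovery_point: datetime,
                       outage_start: datetime,
                       service_restored: datetime,
                       rpo_target: timedelta = timedelta(seconds=30),
                       rto_target: timedelta = timedelta(minutes=15)) -> dict:
    """Compare the RPO/RTO actually achieved during an outage with SLA targets."""
    achieved_rpo = outage_start - last_recovery_point   # data written but not yet protected
    achieved_rto = service_restored - outage_start      # time the service was unavailable
    return {
        "achieved_rpo": achieved_rpo,
        "achieved_rto": achieved_rto,
        "rpo_met": achieved_rpo <= rpo_target,
        "rto_met": achieved_rto <= rto_target,
    }
```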

DRaaS solutions also use scalable infrastructure, allowing virtual access to assets with little or no hardware and software expenditure, which saves on licenses and equipment. Because DRaaS solutions are managed by third parties, your internal IT resources are freed up for other initiatives. DRaaS platforms vary, so research your options to find the one that best meets your needs.

A DR plan is basically a data protection strategy, one that contains numerous components to help ensure the data your business needs is there when it is needed — even if a manmade or natural disaster strikes.

Trevor Bidle has been information security and compliance officer for US Signal, the leading end-to-end solutions provider, since October 2015. Previously, Bidle was the vice president of engineering at US Signal. Bidle is a certified information systems auditor and is completing his master’s degree in Cybersecurity Policy and Compliance at The George Washington University.

ERAU students generate forecasts with eye-catching daily graphics

Embry-Riddle Aeronautical University (ERAU) decided to amp up its broadcast meteorology classes with professional weather graphics and precision storm tracking tools that can be used to illustrate complex weather conditions and explain weather concepts to students. The customizable graphics platform enables the university to incorporate a range of other available weather data and create graphics that work well in the classroom environment. Providing weather graphics every day, including holidays, helps the university tell the most important national and regional weather story of the day. By expanding the tools student forecasters have on hand, the weather platform provides exceptional analysis and learning opportunities.

First used for broadcast meteorology classes, the new graphic system is now being used for weather analysis and forecasting, aviation weather, and tropical meteorology classes. ERAU continues to expand its use to create more content for the website and as a teaching tool for student pilots and a variety of other situations. And students are sitting up and taking notice. Enrollment in broadcast meteorology classes has more than doubled since they began using the new tools.

Explanations work better with good graphics

Robert Eicher, Assistant Professor of Meteorology, was searching for a high-quality instructional weather analysis and graphics system for his broadcast meteorology class. Before coming to ERAU, Eicher had worked as a television weather broadcaster for two decades. He knew the power of good graphics in explaining weather to audiences and was looking to extend that to his students.

“Lectures are usually accompanied by PowerPoint presentations with a lot of words,” Eicher explains. “As they say, a picture is worth a thousand words – it is easier to explain what’s going on if you have a good graphic. And animated graphics go a lot farther for illustrating what we are teaching about weather.”

Professor Eicher began shopping around for a weather analysis system that would fit into an instructional environment. After looking at available options, he eventually opted for Baron Lynx™, which combines weather graphics, weather analysis and storm tracking into a single platform. He had familiarity with Baron weather products, having used them at television stations in Orlando, Florida and Charlotte, North Carolina.

The Lynx platform includes several components. One area is dedicated to weather analysis, where students analyze weather data across the continental United States. Another area enables students to assemble and prepare the weather show and deliver it during a weather cast. The third is a creative component dedicated to weather graphics, which allows students to generate new weather graphics using existing graphical elements or by creating entirely new artwork.

Lynx was developed with the direct input of more than 70 broadcast professionals, including meteorologists and news directors. When introduced in 2016, Lynx garnered rave reviews for telling captivating weather stories and dominating station-defining moments. TV stations liked that Lynx offered them a scalable architecture that they could configure specifically to their own needs. With that came an arsenal of tools, including wall interaction, instant social media posting, forecast editing, daily graphics, and of course storm analysis. Integration across all platforms – on-air, online, and mobile – was another big plus for weather news professionals.

For Professor Eicher, the two deciding factors in favor of selecting Lynx were value for the money and customizability. “Compared to other options I looked at, you get a lot more for your money – a bigger bang for the buck. I also liked the customizability, which works well for our unique situation. As a university, we are already getting a ton of data from an existing National Oceanic and Atmospheric Administration (NOAA) data port. I like that Lynx allows us to incorporate the data we are getting and make good graphics with it. We can get in and tinker around and do some innovative things for the classroom environment.”

One unique example involved teaching aviation school students about the potential for icing. Eicher went into Lynx and adjusted contours at an atmospheric air pressure of 700 millibars (at 10,000 feet) to show only the 32 degree line, so the students could see where the freezing level was at 10,000 feet. He then adjusted the contours of relative humidity that were 75 percent and above. The result illustrated where the temperature and humidity combined to produce ice, showing the icing potential at that flying level. “It is a unique graphic that I don’t think anyone else has,” noted Eicher.
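
The logic behind that icing graphic is simple enough to express directly. The sketch below only mirrors the thresholds Eicher describes (temperature at or below 32 degrees F and relative humidity of 75 percent or higher at the 700 mb level); it is an illustration of the concept, not the Lynx implementation.

```python
import numpy as np

def icing_potential(temp_700mb_f: np.ndarray, rh_700mb_pct: np.ndarray) -> np.ndarray:
    """Boolean mask of grid points with icing potential near 700 mb (about 10,000 ft):
    temperature at or below freezing and relative humidity of 75 percent or more."""
    return (temp_700mb_f <= 32.0) & (rh_700mb_pct >= 75.0)

# Tiny made-up grid to show the idea
temps = np.array([[30.0, 33.0, 28.0],
                  [31.0, 29.0, 35.0]])
humidity = np.array([[80.0, 90.0, 60.0],
                     [76.0, 74.0, 85.0]])
print(icing_potential(temps, humidity))   # True only where both conditions hold
```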

The program is being used for weather analysis and forecasting and also enables broadcast meteorology students to publish their forecasts and make them visible to people outside the classroom. “In the past, students would have written their forecasts and only their professor would see it,” said Professor Eicher. “Now the class has a clear purpose. Student meteorologists use Lynx to prepare weather analyses and forecasts and publish the results to the ERAU website using the Baron Digital Content Manager (DCM) portal.”

While not a part of Lynx, the DCM is a web portal that communicates with Lynx. Using the DCM, meteorologists can update forecasts remotely and publish them across mobile platforms and websites. It is accessible to anyone who has credentials: students can log in from their home, lab, or class and enter the data. The DCM forecast builder feature allows users to populate their forecast, select weather graphics associated with specific forecast conditions using a spreadsheet-like form for the data entry, and publish them to the ERAU website. The forecast graphics and the resulting format are predefined during system setup.

On weekends, holiday breaks, or summer vacation, the DCM can be set to revert to the National Weather Service (NWS) forecast, solving the problem of what to do if students are not there to issue a forecast. Eicher considers this a feature that would be extremely useful for any university, because it means a current forecast will always appear on the website. According to Professor Eicher, “The ability to update the forecast via our web portal provided a solution for a need that had been unmet for five years or more.”
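
The fallback behavior Eicher describes can be pictured as a simple selection rule: publish the student-entered forecast when a current one exists, otherwise revert to the NWS forecast. The sketch below is purely hypothetical; the field name issued_at and the 24-hour freshness window are assumptions for illustration, not the actual Baron DCM interface.

```python
from datetime import datetime, timedelta
from typing import Optional

MAX_FORECAST_AGE = timedelta(hours=24)   # assumed freshness window, for illustration only

def choose_forecast(student_forecast: Optional[dict],
                    nws_forecast: dict,
                    now: datetime) -> dict:
    """Publish the student forecast if one was entered recently; otherwise
    revert to the National Weather Service forecast so the site stays current."""
    if student_forecast is not None:
        issued = student_forecast["issued_at"]      # hypothetical field name
        if now - issued <= MAX_FORECAST_AGE:
            return student_forecast
    return nws_forecast
```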


Teaching Assistant Michelle Hughes uses Lynx to prepare weather analyses and forecasts and publish to the ERAU website.

In general, Eicher has found a lack of good real-time weather instructional material, so he has turned to the Lynx program to develop better teaching tools. In addition to the original broadcast meteorology course, he and other instructors are also using the program for aviation weather and tropical meteorology classes. He anticipates it will soon be used to develop instructional graphics for an introduction to meteorology course. For example, rather than showing only a still image of current upper-level wind patterns, Lynx allows instructors to animate the winds with moving arrows. This type of animation clearly illustrates conditions and highlights areas where attention should be focused.


ERAU is also using the program to develop other high quality instructional materials, including animated graphics that can be used to explain important regional and national weather events, for example, the recent California wildfires.

Positive feedback for new teaching tool

ERAU faculty and administration are extremely pleased with the availability of the new teaching tool for broadcast and meteorology students, and student pilots. Located in a broadcast studio that is part of the meteorology computer lab, Baron Lynx is accessible to the entire meteorology faculty and students, with output connected to adjacent classrooms. Enrollment in broadcast meteorology classes has more than doubled since ERAU obtained these new tools.

Support and training on the product have been provided at a high level. The Baron technical support staff is used to supporting television stations 24/7/365, so they were not thrown off by students calling them on a Saturday afternoon with questions on how to produce graphics for their forecasts. The students showed off their new knowledge with a live Facebook stream on travel weather the day before Thanksgiving.

Eicher also gave high grades to the staff training provided. “The staff person brought in to train me on use of the program actually assisted with teaching the broadcast meteorology class, showing the students how to use the program directly.”

Customizable graphics product ideal for classroom environment

The customizable Lynx product enables the university to incorporate a range of other available weather data and create graphics ideal for the classroom environment.

The university is also looking into developing a range of other graphics for use on its new website, as well as creating more content with Lynx for educational purposes. Also in the planning stages is connecting other camera sources, such as a roof/sky camera, to the Lynx program and combining that imagery with weather data. “Word is getting out that we have a pretty unique opportunity,” concludes Professor Eicher.

It was a balmy 67-degree day in New York on March 15, which prompted the inevitable joke that since it’s warm outside, climate change must be real. The wry comment was made by one of the speakers at the New York Academy of Sciences’ symposium Science for decision making in a warmer world: 10 years of the NPCC.

The NPCC is the New York City Panel on Climate Change, an independent body of scientists that advises the city on climate risks and resiliency. The symposium coincided with the release of the NPCC’s 2019 report, which found that in the New York City area extreme weather events are becoming more pronounced, high temperatures in summer are rising, and heavy downpours are increasing.

“The report tracks increasing risks for the city and region due to climate change,” says Cynthia Rosenzweig, co-chair of the NPCC and senior research scientist at Columbia University’s Earth Institute. “It continues to lay the science foundation for development of flexible adaptation pathways for changing climate conditions.”

...

http://www.iii.org/insuranceindustryblog/new-york-citys-climate-change-resiliency/

Thursday, 28 March 2019 19:52

NEW YORK CITY’S DISASTER RESILIENCY

Where do I start?

This is a conversation and situation I’ve had many times with different people, and it may feel familiar to some of you. You’ve been tasked with developing a BC/DR program for your organization. Assume you have little or nothing in place, and what you do have is so out of date that you feel it would be wise to start fresh. The question invariably comes up: Where do I start?

Depending on your training or background, this may start with a Business Impact Analysis (BIA) in order to prioritize and analyze your organization’s critical processes. If you have a security or internal audit background, you may feel inclined to start with a Risk Assessment. You may have an IT background and feel that your application infrastructure is paramount and you need a DR program immediately. If you’ve come from the emergency services or military, life safety might be foremost in your mind, and emergency response and crisis management might be the first steps. I’ve seen clients from big pharmaceutical companies that treat their supply chain as their number one priority.

The reality is that although there are prescribed methodologies with starting points outlined in best practices by various institutes and organizations with expertise in the field, there is only one expert when it comes to your organization. You.

...

https://www.bcinthecloud.com/2019/03/business-continuity-methodology/

How do you create an insights-driven organization? One way is leadership. And we’d like to hear about yours.

Today, half of the respondents in Forrester’s Business Technographics® survey data report that their organizations have a chief data officer (CDO). A similar number report having a chief analytics officer (CAO). Many firms without these insights leaders report plans to appoint one in the near future. Advocates for data and analytics now have permanent voices at the table.

To better understand these leadership roles, Forrester fielded its inaugural survey on CDO/CAOs in the summer of 2017. Now we’re eager to learn how the mandates, responsibilities, and influence of data and analytics leaders and their teams have evolved in the past 18 months. Time for a new survey!

Take Forrester’s Data And Analytics Leadership Survey

Are you responsible for data and analytics initiatives at your firm? If so, we need your expertise and insights! Forrester is looking to understand:

  • Which factors drive the appointment of data and analytics leaders, as well as the creation of a dedicated team?
  • Which roles are part of a data and analytics function? How is the team organized?
  • What challenges do data and analytics functions encounter?
  • What is the working relationship between data and analytics teams and other departments?
  • What data and analytics use case, strategy, technology, people, and process support do these teams offer? How does the team prioritize data and analytics requests from stakeholders?
  • Which data providers do teams turn to for external data?
  • Which strategies do teams use to improve data and analytics literacy within the company?

Please complete our 20-minute (anonymous) Data and Analytics Leadership Survey. The results will fuel an update to the Forrester report, “Insights-Driven Businesses Appoint Data Leadership,” as well as other reports on the “data economy.”

For other research on data and analytics leadership, please also take a look at “Strategic CDOs Accelerate Insights-To-Action” and “Data Leaders Weave An Insights-Driven Corporate Fabric.”

As a thank-you, you’ll receive a courtesy copy of the initial report of the survey’s key findings.

Thanks in advance for your participation.

https://go.forrester.com/blogs/data-and-analytics-leaders-we-need-you/

Friday, 08 March 2019 16:27

Data And Analytics Leaders, We Need You!

The reports of the death of the field of business continuity have been greatly overstated. But those of us who work in it do have to raise our performance in a few critical areas.

For some time, reports predicting the imminent demise of the field of business continuity have been a staple of industry publications and gatherings.

The most prominent of these have been the manifesto and book written by David Lindstedt and Mark Armour. For an interesting summary and review of their work, check out this article by Charlie Maclean Bristol on BC Training.

...

https://bcmmetrics.com/business-continuity-r-i-p/

Friday, 22 February 2019 15:25

Business Continuity, R.I.P.?

Weather tools help Team Rubicon respond quicker and reduce risks

By Glen Denny, President, Enterprise Solutions, Baron Critical Weather Solutions

Team Rubicon is an international disaster response nonprofit with a mission of using the skills and experiences of military veterans and first responders to rapidly provide relief to communities in need. Headquartered in Los Angeles, California, Team Rubicon has more than 80,000 volunteers around the country ready to jump into action when needed to provide immediate relief to those affected by natural disasters.

More than 80 percent of the disasters Team Rubicon responds to are weather-related, including crippling winter storms, catastrophic hurricanes, and severe weather outbreaks – like tornadoes. While always ready to serve, the organization needed better weather intelligence to help them prepare and mitigate risks. After adopting professional weather forecasting and monitoring tools, operations teams were able to pinpoint weather hazards, track storms, view forecasts, and set up custom alerts. And the intelligence they gained made a huge difference in the organization’s response to Hurricanes Florence and Michael.

Team Rubicon relies on skills and experiences of military veterans and first responders

About 75 percent of Team Rubicon volunteers are military veterans, who find that their skills in emergency medicine, small-unit leadership, and logistics are a great fit with disaster response. It also helps with their ability to hunker down in challenging environments to get the job done. A further 20 percent of volunteers are trained first responders, while the rest are volunteers from all walks of life. The group is a member of National Voluntary Organizations Active in Disaster (National VOAD), an association of organizations that mitigate and alleviate the impact of disasters.

By focusing on underserved or economically-challenged communities, Team Rubicon seeks to make the largest impact possible. According to William (“TJ”) Porter, manager of operational planning, Team Rubicon’s core mission is to help those who are often forgotten or left behind; they place a special emphasis on helping under-insured and uninsured populations.

Porter, a 13-year Air Force veteran, law enforcement officer, world traveler, and former American Red Cross worker, proudly stands by Team Rubicon’s service principles, “Our actions are characterized by the constant pursuit to prevent or alleviate human suffering and restore human dignity – we help people on their worst day.”

Weather-related disasters pose special challenges

The help Team Rubicon provides for weather-related disasters runs the gamut: removing trees from roadways, clearing paths for service vehicles, bringing in supplies, conducting search and rescue missions (including boat rescues), dealing with flooded-out homes, mucking out after a flood, mold remediation, and just about anything else needed. While Team Rubicon had greatly expanded its equipment inventory in recent years to help with these tasks, the organization lacked the deep level of weather intelligence that could help it understand and mitigate risks – and keep its teams safe from danger.

That’s where Baron comes into the story. After learning of the impressive work Team Rubicon is doing at the Virginia Emergency Management Conference, a Baron team member struck up a conversation with Team Rubicon, asking if they had a need for detailed and accurate weather data to help them plan their efforts. Team Rubicon jumped at the opportunity and Baron ultimately donated access to its Baron Threat Net product. Key features allow users to pinpoint weather hazards by location, track storms, view forecasts and set up custom alerts, including location-based pinpoint alerting and standard alerts from the National Weather Service (NWS). The web portal weather monitoring system provides street level views and the ability to layer numerous data products. Threat Net also offers a mobile companion application that gives Team Rubicon access to real-time weather monitoring on the go.

This suited Team Rubicon down to the ground. “In years past, we didn’t have a good way to monitor weather,” explains Porter. “We went onto the NWS, but our folks are not meteorologists, and they don’t have that background to make crucial decisions. Baron Threat Net helped us understand risks and mitigate the risks of serious events. It plays a crucial role in getting teams in as quickly as possible so we can help the greatest number of people.”

New weather tools help with response to major hurricanes

The new weather intelligence tools have already had a huge impact on Team Rubicon’s operations. Take the example of how access to weather data helped Team Rubicon with its massive response to Hurricane Florence. A day or so before the hurricane was due to make landfall, Dan Gallagher, Enterprise Product Manager and meteorologist at Baron Services, received a call from Team Rubicon, requesting product and meteorological support. Individual staff had been using the new Baron Threat Net weather tools to a degree since gaining access to them, but the operations team wanted more training and support in the face of what looked like a major disaster barreling towards North Carolina, South Carolina, Virginia, and West Virginia.

Gallagher, a trained meteorologist with more than 18 years of experience in meteorological research and software development, quickly hopped on a plane, arriving at Team Rubicon’s National Operations Center in Dallas. His first task was to meet operational manager Porter’s request to help them guide reconnaissance teams entering the area. They wanted to place a reconnaissance team close to the storm – but not in mortal danger. Using the weather tools, Gallagher located a spot north of Wilmington, NC between the hurricane’s eyewall and outer rain bands that could serve as a safe spot for reconnaissance.

The next morning, Gallagher provided a weather briefing to ensure that operations staff had the latest weather intelligence. “I briefed them on where the storm was, where it was heading, the dangers that could be anticipated, areas likely to be most affected, and the hazards in these areas.”

Throughout the day, Gallagher conducted a number of briefings and kept the teams up to date as Hurricane Florence slowly moved overland. He also provided video weather briefings for the reconnaissance team in their car en route to their destination.

Another crew based in Charlotte was planning the safest route for trucking in supplies based on weather conditions. They wanted help in choosing whether to haul the trailer from Atlanta, GA or Alexandria, VA. “I was not there to make a recommendation on an action but rather to give them the weather information they need to make their decision,” explains Gallagher. “As a meteorologist, I know what the weather is, but they decide how it impacts their operation. As soon as I gave a weather update they could make a decision within seconds, making it possible to act on that decision.” Team Rubicon used the information Gallagher provided to select the Alexandria, VA route; their crackerjack logistics team was then able to quickly make all the needed logistical arrangements.

In addition to weather briefings, Gallagher provided more detailed product training on Baron Threat Net, observed how the teams actually use the product, and learned how the real-time products were performing. He also got great feedback on other data products that might enhance Team Rubicon’s ability to respond to disasters.

Team Rubicon gave very high marks to the high-resolution weather/forecast model available in Baron Threat Net. They relied upon the predictive precipitation accumulation and wind speed information, as well as information on total precipitation accumulation (what has already fallen in the past 24 hours).

The wind damage product showing shear rate was very useful to Team Rubicon. In addition, the product did an excellent job of detecting rotation, including picking out the weak tornadoes spawned from the hurricane that were present in the outer rain bands of Hurricane Florence. These are typically very difficult to identify and warn people about, because they spin up quickly and are relatively shallow and weak (with tornado damage of EF0 or EF1 as measured on the Enhanced Fujita Scale). Gallagher had seen how well the wind damage product performed in larger tornado cases but was particularly gratified at how well it helped the team detect these smaller ones.

For example, Lauren Vatier of Team Rubicon’s National Incident Management Team commented that she had worked with Baron Threat Net before the Florence event, but using it so intensively made her more familiar with how to use the product and really helped cement her knowledge. “Before Florence I had not used Baron Threat Net for intel purposes. Today I am looking for information on rain accumulation and wind, and I’m looking ahead to help the team understand what the situation will look like in the future. It helps me understand and verify the actual information happening with the storm. I don’t like relying on news articles. Now I can look into the product and get accurate and reliable information.”

Vatier also really likes the ability to pinpoint information on a map showing colors and ranges. “You can click on a point and tell how much accumulation has occurred or what the wind speed is. The pinpointing is a valuable part of Baron Threat Net.” The patented Baron Pinpoint Alerting technology automatically sends notifications any time impactful weather approaches; alert types include severe storms and tornadoes; proximity alerts for approaching lightning, hail, snow and rain; and National Weather Service warnings. She concludes, “I feel empowered by the program. It ups my confidence in my ability to provide accurate information.”
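
As a generic illustration of location-based proximity alerting (not Baron’s patented method), the sketch below computes the great-circle distance between a saved location and a reported hazard, such as a lightning strike, and flags it when it falls inside an alert radius; the 10-mile default is an arbitrary placeholder.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8   # mean Earth radius in miles

def distance_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two latitude/longitude points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

def proximity_alert(site_lat: float, site_lon: float,
                    hazard_lat: float, hazard_lon: float,
                    radius_mi: float = 10.0) -> bool:
    """True when the reported hazard falls within the alert radius of the saved location."""
    return distance_miles(site_lat, site_lon, hazard_lat, hazard_lon) <= radius_mi
```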

TJ Porter concurred that Baron Threat Net helped Team Rubicon mobilize the large teams that deployed for Hurricane Florence. “It is crucial to put people on the ground and make sure they’re safe. Baron Threat Net helps us respond quicker to disasters. It also helps the strike teams ensure they are not caught up in other secondary or rapid onset weather events.”

Porter explains that the situation unit leaders actively monitor weather through the day using Baron Threat Net. “We are giving them all the tools at our disposal, because these are the folks who provide early warnings to keep our folks safe.”

Future-proofing weather data

Being on the ground with Team Rubicon during the Hurricane Florence disaster recovery response gave Baron’s Gallagher an unusual opportunity to discuss other ways Baron weather products could help respond to weather-related disasters. According to Porter, “We are looking to Baron to help us understand secondary events, like the extensive flooding resulting from Hurricane Florence, and to understand where these hazards are today, tomorrow, and the next day.”

In addition, Team Rubicon is committed to targeting those areas of greatest need, so they want to be able to layer weather information with other data sets, especially social vulnerability, including location of areas with uninsured or underinsured populations. Says Porter, “Getting into areas we know need help will shave minutes, hours, or even days off how long it takes to be there helping”.

In the storm’s aftermath

At the time this article was written, hundreds of Team Rubicon volunteers were deployed as part of Hurricane Florence response operations and later in response to Hurricane Michael. Their work has garnered them a tremendous amount of national appreciation, including a spotlight appearance during Game 1 of the World Series. T-Mobile used its commercial television spots to support the organization, also pledging to donate $5,000 per post-season home run plus $1 per Twitter or Instagram post using #HR4HR to Team Rubicon.

Baron’s Gallagher appreciated the opportunity to see in real time how customers use its products, saying “The experience helped me frame improvements we can develop that will positively affect our clients using Baron Threat Net.”

By Alex Winokur, founder of Axxana

 

Disaster recovery is now on the list of top concerns of every CIO. In this article we review the evolution of the disaster recovery landscape, from its inception until today. We look at the current understanding of disaster behavior and as a result the disaster recovery processes. We also try to cautiously anticipate the future, outlining the main challenges associated with disaster recovery.

The Past

The computer industry is relatively young. The first commercial computers appeared somewhere in the 1950s—not even seventy years ago. The history of disaster recovery (DR) is even younger. Table 1 outlines the appearance of the various technologies necessary to construct a modern DR solution.


Table 1 – Early history of DR technology development

 

From Magnetic Tapes to Data Networks

The first magnetic tapes for computers were used as input/output devices. That is, input was punched onto punch cards that were then stored offline to magnetic tapes. Later, UNIVAC I, one of the first commercial computers, was able to read these tapes and process their data. Later still, output was similarly directed to magnetic tapes that were connected offline to printers for printing purposes. Tapes began to be used as a backup medium only after 1954, with the introduction of the mass storage device (RAMAC).

Figure 1 – First storage system (RAMAC)

Although modern wide-area communication networks date back to 1974, data has been transmitted over long-distance communication lines since 1837 via telegraphy systems. These telegraphy communications have since evolved to data transmission over telephone lines using modems.

Modems were introduced on a large scale in 1958 to connect United States air defense systems; however, their throughput was very low compared to what we have today. The FAA clustered system used communication links that had been designed for computers to communicate with their peripherals (e.g., tapes). Local area networks (LANs) as we now know them had not been invented yet.

Early Attempts at Disaster Recovery

It wasn’t until the 1970s that concerns about disaster recovery started to emerge. In that decade, the deployment of IBM 360 computers reached a critical mass, and they became a vital part of almost every organization. Until the mid-1970s, the perception was that if a computer failed, it would be possible to fall back to paper-based operation as was done in the 1960s. However, the widespread adoption of digital technologies in the 1970s led, on one hand, to a corresponding increase in technological failures; on the other hand, theoretical calculations, backed by real-world evidence, showed that switching back to paper-based work was not practical.

The emergence of terrorist groups in Europe like the Red Brigades in Italy and the Baader-Meinhof Group in Germany further escalated concerns about the disruption of computer operations. These left-wing organizations specifically targeted financial institutions. The fear was that one of them would try to blow up a bank’s data centers.

At that time, communication networks were in their infancy, and replication between data centers was not practical.

Parallel workloads. IBM came up with the idea of using the FAA clustering technology to build two adjoining computer rooms that were separated by a steel wall, with one cluster node in each room. The idea was to run the same workload twice and to be able to immediately fail over from one system to the other if one system was attacked. A closer analysis revealed that in the case of a terror attack, the only surviving object would be the steel wall, so the plan was abandoned.

Hot, warm, and cold sites. The inability of computer vendors (IBM was the main vendor at the time) to provide an adequate DR solution made way for dedicated DR firms like SunGard to provide hot, warm, or cold alternate sites. Hot sites, for example, were duplicates of the primary site; they independently ran the same workloads as the primary site, as communication between the two sites was not available at the time. Cold sites served as repositories for backup tapes. Following a disaster at the primary site, operations would resume at the cold site by allocating equipment, executing a restore from backup operations, and restarting the applications. Warm sites were a compromise between a hot site and a cold site. These sites had hardware and connectivity already established; however, recovery was still done by restoring the data from backups before the applications could be restarted.

Backups and high availability. The major advances in the 1980s were around backups and high availability. On the backup side, regulations requiring banks to have a testable backup plan were enacted. These were probably the first DR regulations to be imposed on banks; many more followed through the years. On the high availability side, Digital Equipment Corporation (DEC) made the most significant advances in LAN communications (DECnet) and clustering (VAXcluster).

The Turning Point

On February 26, 1993, the first bombing of the World Trade Center (WTC) took place. This was probably the most significant event shaping the disaster recovery solution architectures of today. People realized that the existing disaster recovery solutions, which were mainly based on tape backups, were not sufficient. They understood that too much data would be lost in a real disaster event.

SRDF. By this time, communication networks had matured, and EMC became the first to introduce storage-to-storage replication software, called Symmetrix Remote Data Facility (SRDF).

 

Behind the Scenes at IBM

At the beginning of the nineties, I was with IBM’s research division. At the time, we were busy developing a very innovative solution to shorten the backup window, as backups were the foundation for all DR and the existing backup windows (dead hours during the night) started to be insufficient to complete the daily backup. The solution, called concurrent copy, was the ancestor of all snapshotting technologies, and it was the first intelligent function running within the storage subsystem. The WTC event in 1993 left IBM fighting the “yesterday battles” of developing a backup solution, while giving EMC the opportunity to introduce storage-based replication and become the leader in the storage industry.

 

The first few years of the 21st century will always be remembered for the events of September 11, 2001—the date of the complete annihilation of the World Trade Center. Government, industry, and technology leaders realized then that some disasters can affect the whole nation, and therefore DR had to be taken much more seriously. In particular, the attack demonstrated that existing DR plans were not adequate to cope with disasters of such magnitude. The notion of local, regional, and nationwide disasters crystalized, and it was realized that recovery methods that work for local disasters don’t necessarily work for regional ones.

SEC directives. In response, the Securities and Exchange Commission (SEC) issued a set of very specific directives in the form of the “Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System.” These regulations, still in effect today, bind all financial institutions. The DR practices that were codified in the SEC regulations quickly propagated to other sectors, and disaster recovery became a major area of activity for all organizations relying on IT infrastructure.

The essence of these regulations is as follows:

  1. The economic stance of the United States cannot be compromised under any circumstance.
  2. Relevant financial institutions are obliged to resume operations correctly, without any data loss, by the next business day following a disaster.
  3. Alternate disaster recovery sites must use different physical infrastructure (electricity, communication, water, transportation, and so on) than the primary site.

Note that Requirements 2 and 3 above are somewhat contradictory. Requirement 2 necessitates synchronous replication to facilitate zero data loss, while Requirement 3 basically dictates long distances between sites—thereby making the use of synchronous replication impossible. This contradiction is not addressed within the regulations and is left to each implementer to deal with at its own discretion.
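
A rough back-of-the-envelope calculation shows why the two requirements pull in opposite directions: a synchronous write cannot complete until the remote site acknowledges it, and a signal in optical fiber covers only about 200 km per millisecond, so the penalty grows with distance. The numbers below are approximations for illustration only.

```python
FIBER_KM_PER_MS = 200.0   # roughly two-thirds the speed of light in a vacuum

def sync_write_penalty_ms(distance_km: float, round_trips: int = 1) -> float:
    """Minimum extra latency a synchronous write pays while waiting for the
    remote site's acknowledgment, ignoring switching and storage overheads."""
    return round_trips * (2 * distance_km / FIBER_KM_PER_MS)

for km in (10, 100, 1000):
    print(f"{km:>5} km separation adds at least {sync_write_penalty_ms(km):.1f} ms per write")
# 10 km adds ~0.1 ms, 100 km ~1 ms, 1000 km ~10 ms, which is why widely separated
# sites generally rule out synchronous (zero-data-loss) replication.
```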

The secret to resolving this contradiction lies in the ability to reconstruct missing data if or when data loss occurs. The nature of most critical data is such that there is always at least one other instance of this data somewhere in the universe. The trick is to locate it, determine how much of it is missing in the database, and augment the surviving instance of the database with this data. This process is called data reconciliation, and it has become a critical component of modern disaster recovery. [See The Data Reconciliation Process sidebar.]

 

The Data Reconciliation Process

If data is lost as a result of a disaster, the database becomes misaligned with the real world. The longer this misalignment exists, the greater the risk of application inconsistencies and operational disruptions. Therefore, following a disaster, it is very important to align back the databases with the real world as soon as possible. This process of alignment is called data reconciliation.

The reconciliation process has two important characteristics:

  1. It is based on the fact that the data lost in a disaster exists somewhere in the real world, and thus it can be reconstructed in the database.
  2. The duration and complexity of the reconciliation is proportional to the recovery point objective (RPO); that is, it’s proportional to the amount of data lost.

One of the most common misconceptions in disaster recovery is that RPO (for example, RPO = 5) refers to how many minutes of data the organization is willing to lose. What RPO really means is that the organization must be able to reconstruct and reconsolidate (i.e., reconcile) that last five minutes of missing data. Note that the higher the RPO (and therefore, the greater the data loss), the longer the RTO and the costlier the reconciliation process. Catastrophes typically occur when RPO is compromised and the reconciliation process takes much longer.

In most cases, the reconciliation process is quite complicated, consisting of time-consuming processes to identify the data gaps and then resubmitting the missing transactions to realign the databases with real-world status. This is a costly, mainly manual, error-prone process that greatly prolongs the recovery time of the systems and magnifies risks associated with downtime.
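
In its simplest form, the gap-identification step of reconciliation is a set comparison between a surviving real-world record (for example, a counterparty’s or payment processor’s log) and the recovered database. The sketch below is a toy illustration with made-up field names, not a description of any particular reconciliation tool.

```python
def find_missing_transactions(external_log: list[dict], recovered_ids: set) -> list[dict]:
    """Transactions present in the surviving external record but absent from the
    recovered database: this is the gap that must be reconciled."""
    return [txn for txn in external_log if txn["id"] not in recovered_ids]

# Toy example: the last two transactions were lost along with the primary site.
external_log = [{"id": 101, "amount": 50}, {"id": 102, "amount": 75}, {"id": 103, "amount": 20}]
recovered_ids = {101}
missing = find_missing_transactions(external_log, recovered_ids)
print([t["id"] for t in missing])   # [102, 103]: resubmit these to realign the database
```

As the sidebar notes, the larger the RPO, the longer this list of missing transactions becomes, and the longer and riskier the reconciliation step is.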

 

The Present

The second decade of the 21st century has been characterized by new types of disaster threats, including sophisticated cyberattacks and extreme weather hazards caused by global warming. It is also characterized by new DR paradigms, like DR automation, disaster recovery as a service (DRaaS), and active-active configurations.

These new technologies are for the most part still in their infancy. DR automation tools attempt to orchestrate a complete site recovery through invocation of one “site failover” command, but they are still very limited in scope. A typical tool in this category is the VMware Site Recovery Manager (SRM). DRaaS attempts to reduce the cost of DR-compliant installation by locating the secondary site in the cloud. The new active-active configurations try to reduce equipment costs and recovery time by utilizing techniques that are used in the context of high availability; that is, to recover from a component failure rather than a complete site failure.

Disasters vs. Catastrophes

The following definitions of disasters and disaster recovery have been refined over the years to make a clear distinction between the two main aspects of business continuity: high availability protection and disaster recovery. This distinction is important because it crystalizes the difference between disaster recovery and a single component failure recovery covered by highly available configurations, and in doing so also accounts for the limitations of using active-active solutions for DR.

A disaster in the context of IT is either a significant adverse event that causes an inability to continue operation of the data center or a data loss event where recovery cannot be based on equipment at the data center. In essence, disaster recovery is a set of procedures aimed to resume operations following a disaster by failing over to a secondary site.

From a DR procedures perspective, it is customary to classify disasters into 1) regional disasters like weather hazards, earthquakes, floods, and electricity blackouts and 2) local disasters like local fires, onsite electrical failures, and cooling system failures.

Over the years, I have also noticed a third, independent classification of disasters. Disasters can also be classified as catastrophes. In principle, a catastrophe is a disastrous event in which something very unexpected happens in the course of the disaster, causing the disaster recovery plans to dramatically miss their service level agreement (SLA); that is, they typically exceed their recovery time objective (RTO).

When DR procedures go as planned for regional and local disasters, organizations fail over to a secondary site and resume operations within pre-determined parameters for recovery time (i.e., RTO) and data loss (i.e., RPO). The organization’s SLAs, business continuity plans, and risk management goals align with these objectives, and the organization is prepared to accept the consequent outcomes. A catastrophe occurs when these SLAs are compromised.

Catastrophes can also result from simply failing to execute the DR procedures as specified, typically due to human errors. However, for the sake of this article, let’s be optimistic and assume that DR plans are always executed flawlessly. We shall concentrate only on unexpected events that are beyond human control.

Most of the disaster events that have been reported in the news recently (for example, the Amazon Prime Day outage in July 2018 and the British Airways bank holiday outage in 2017) have been catastrophes related to local disasters. If DR could have been properly applied to the disruptions at hand, nobody would have noticed that there had been a problem, as the DR procedures were designed to provide almost zero recovery time and hence zero down time.

The following two examples provide a closer look at how catastrophes occur.

9/11 – Following the September 11 attack, several banks experienced major outages. Most of them had a fully equipped alternate site in Jersey City—no more than five miles away from their primary site. However, the failover failed miserably because the banks’ DR plans called for critical personnel to travel from their primary site to their alternate site, but nobody could get out of Manhattan.

A data center power failure during a major snow storm in New England – Under normal DR operations at this organization, the data was synchronously replicated to an alternate site. However, 90 seconds prior to a power failure at the primary site, the central communication switch in the area lost power too, which cut all WAN communications. As a result, the primary site continued to produce data for 90 seconds without replication to the secondary site; that is, until it experienced the power failure. When it finally failed over to the alternate site, 90 seconds of transactions were missing; and because the DR procedures were not designed to address recovery where data loss has occurred, the organization experienced catastrophic down time.

The common theme of these two examples is that in addition to the disaster at the data center there was some additional—unrelated—malfunction that turned a “normal” disaster into a catastrophe. In the first case, it was a transportation failure; in the second case, it was a central switch failure. Interestingly, both failures occurred to infrastructure elements that were completely outside the control of the organizations that experienced the catastrophe. Failure of the surrounding infrastructure is indeed one of the major causes for catastrophes. This is also the reason why the SEC regulations put so much emphasis on infrastructure separation between the primary and secondary data center.

Current DR Configurations

In this section, I’ve included examples of two traditional DR configurations that separate the primary and secondary center, as stipulated by the SEC. These configurations have predominated in the past decade or so, but they cannot ensure zero data loss in rolling disasters and other disaster scenarios, and they are being challenged by new paradigms such as that introduced by Axxana’s Phoenix. While a detailed discussion would be outside the scope of this article, suffice it to say that Axxana’s Phoenix makes it possible to avoid catastrophes such as those just described—something that is not possible with traditional synchronous replication models.


Figure 2 – Typical DR configuration

 

Typical DR configuration. Figure 2 presents a typical disaster recovery configuration. It consists of a primary site, a remote site, and another set of equipment at the primary site, which serves as a local standby.

The main goal of the local standby installation is to provide redundancy to the production equipment at the primary site. The standby equipment is designed to provide nearly seamless failover capabilities in case of an equipment failure—not in a disaster scenario. The remote site is typically located at a distance that guarantees infrastructure independence (communication, power, water, transportation, etc.) to minimize the chances of a catastrophe. It should be noted that the typical DR configuration is very wasteful. Essentially, an organization has to triple the cost of equipment and software licenses—not to mention the increased personnel costs and the cost of high-bandwidth communications—to support the configuration of Figure 2.


Figure 3 – DR cost-saving configuration

 

Traditional ideal DR configuration. Figure 3 illustrates the traditional ideal DR configuration. Here, the remote site serves both for DR purposes and high availability purposes. Such configurations are sometimes realized in the form of extended clusters like Oracle RAC One Node on Extended Distance. Although traditionally considered the ideal, they are a trade-off between survivability, performance, and cost. The organization saves on the cost of one set of equipment and licenses, but it compromises survivability and performance. That’s because the two sites have to be in close proximity to share the same infrastructure, so they are more likely to both be affected by the same regional disasters; at the same time, performance is compromised due to the increased latency caused by separating the two cluster nodes from each other.


Figure 4 – Consolidation of DR and high availability configurations with Axxana’s Phoenix


True zero-data-loss configuration. Figure 4 represents a cost-saving solution with Axxana’s Phoenix. In case of a disaster, Axxana’s Phoenix provides zero-data-loss recovery to any distance. So, with the help of Oracle’s high availability support (fast start failover and transparent application failover), Phoenix provides functionality very similar to extended cluster functionality. With Phoenix, however, it can be implemented over much longer distances and with much smaller latency, providing true cost savings over the typical configuration shown in Figure 2.

The Future

In my view, the future is going to be a constant race between new threats and new disaster recovery technologies.

New Threats and Challenges

In terms of threats, global warming creates new weather hazards that are fiercer, more frequent, and far more damaging than in the past—and in areas that have not previously experienced such events. Terror attacks are on the rise, thereby increasing threats to national infrastructures (potential regional disasters). Cyberattacks—in particular ransomware, which destroys data—are a new type of disaster. They are becoming more prolific, more sophisticated and targeted, and more damaging.

At the same time, data center operations are becoming more and more complex. Data is growing exponentially. Instead of getting simpler and more robust, infrastructures are getting more diversified and fragmented. In addition to legacy architectures that aren’t likely to be replaced for a number of years to come, new paradigms like public, hybrid, and private clouds; hyperconverged systems; and software-defined storage are being introduced. Adding to that are an increasing scarcity of qualified IT workers and economic pressures that limit IT spending. All combined, these factors contribute to data center vulnerabilities and to more frequent events requiring disaster recovery.

So, this is on the threat side. What is there for us on the technology side?

New Technologies

Of course, Axxana’s Phoenix is at the forefront of new technologies that guarantee zero data loss in any DR configuration (and therefore ensure rapid recovery), but I will leave the details of our solution to a different discussion.

AI and machine learning. Apart from Axxana’s Phoenix, the most promising technologies on the horizon revolve around artificial intelligence (AI) and machine learning. These technologies enable DR processes to become more “intelligent,” efficient, and predictive by using data from DR tests, real-world DR operations, and past disaster scenarios; in doing so, disaster recovery processes can be designed to better anticipate and respond to unexpected catastrophic events. These technologies, if correctly applied, can shorten RTO and significantly increase the success rate of disaster recovery operations. The following examples suggest only a few of their potential applications in various phases of disaster recovery:

  • They can be applied to improve the DR planning stage, resulting in more robust DR procedures.
  • When a disaster occurs, they can assist in the assessment phase to provide faster and better decision-making regarding failover operations.
  • They can significantly improve the failover process itself, monitoring its progress and automatically invoking corrective actions if something goes wrong.

When these technologies mature, the entire DR cycle from planning to execution can be fully automated. They carry the promise of much better outcomes than processes done by humans because they can process and better “comprehend” far more data in very complex environments with hundreds of components and thousands of different failure sequences and disaster scenarios.

New models of protection against cyberattacks. The second front where technology can greatly help with disaster recovery is the cyberattack front. Right now, organizations are spending millions of dollars on various intrusion prevention, intrusion detection, and asset protection tools. The evolution should be from protecting individual organizations to protecting the global network. Instead of fragmented, per-organization defense measures, the global communication network should be “cleaned” of threats that can create data center disasters. So, for example, phishing attacks that would compromise a data center’s access control mechanisms should be filtered out in the network—or in the cloud—instead of reaching and being filtered at the end points.

Conclusion

Disaster recovery has come a long way—from naive tape backup operations to complex site recovery operations and data reconciliation techniques. The expenses associated with disaster protection don’t seem to go down over the years; on the contrary, they are only increasing.

The major challenge of DR readiness is in its return on investment (ROI) model. On one hand, a traditional zero-data-loss DR configuration requires organizations to implement and manage not only a primary site, but also a local standby and remote standby; doing so essentially triples the costs of critical infrastructure, even though only one third of it (the primary site) is utilized in normal operation.

On the other hand, if a disaster occurs and the proper measures are not in place, the financial losses, reputation damage, regulatory backlash, and other risks can be devastating. As organizations move into the future, they will need to address the increasing volumes and criticality of data. The right disaster recovery solution will no longer be an option; it will be essential for mitigating risk, and ultimately, for staying in business.

Thursday, 07 February 2019 18:15

Disaster Recovery: Past, Present, and Future

What Recent News Means for the Future

The compliance landscape is changing, necessitating changes from the compliance profession as well. A team of experts from CyberSaint discuss what compliance practitioners can expect in the year ahead.

Regardless of experience or background, 2019 will not be an easy year for information security. In fact, we realize it’s only going to get more complicated. However, what we are excited to see is the awareness that the breaches of 2018 have brought to information security – how more and more senior executives are realizing that information security needs to be treated as a true business function – and 2019 will only see more of that.

Regulatory Landscape

As constituents become more technology literate, we will start to see regulatory bodies ramping up security compliance enforcement for the public and private sectors. Along with the expansion of existing regulations, we will also see new cyber regulations come to fruition. While we may not see U.S. regulations similar to GDPR at the federal level in 2019, conversations around privacy regulation will only become more notable. What we are seeing already is the expansion of the DFARS mandate to encompass all aspects of the federal government, going beyond the Department of Defense.

...

https://www.corporatecomplianceinsights.com/a-cybersecurity-compliance-crystal-ball-for-2019/

Today Forrester closed the deal to acquire SiriusDecisions.  

SiriusDecisions helps business-to-business companies align the functions of sales, marketing, and product management; Sirius clients grow 19% faster and are 15% more profitable than their peers. Leaders within these companies make more informed business decisions through access to industry analysts, research, benchmark data, peer networks, events, and continuous learning courses, while their companies run the “Sirius Way” based on proven, industry-leading models and frameworks.

Why Forrester and SiriusDecisions? Forrester provides the strategy needed to be successful in the age of the customer; SiriusDecisions provides the operational excellence. The combined unique value can be summarized in a simple statement:

We work with business and technology leaders to develop customer-obsessed strategies and operations that drive growth. 

...

https://go.forrester.com/blogs/forrester-siriusdecisions/

Thursday, 03 January 2019 15:49

Forrester + SiriusDecisions

By Alex Becker, vice president and general manager of Cloud Solutions, Arcserve

If you’re like most IT professionals, your worst nightmare is waking up to the harsh reality that one of your primary systems or applications has crashed and you’ve experienced data loss. Whether caused by fire, flood, earthquake, cyber attack, programming glitch, hardware failure, human error, whatever – this is generally the moment that panic sets in.

While most IT teams understand unplanned downtime is a question of when, not if, many wouldn’t be able to recover business-critical data in time to avoid a disruption in business. According to new survey research commissioned by Arcserve of 759 global IT decision-makers, half revealed they have less than an hour to recover business-critical data before it starts impacting revenue, yet only a quarter cite being extremely confident in their ability to do so. The obvious question is why.

UNTANGLING THE KNOT OF 21ST CENTURY IT

Navigating modern IT can seem like stumbling through a maze. Infrastructures are rapidly transforming, spreading across different platforms, vendors and locations, but still often include non-x86 platforms to support legacy applications. With these multi-generational IT environments, businesses face increased risk of data loss and extended downtime caused by gaps in the labyrinth of primary and secondary data centers, cloud workloads, operating environments, disaster recovery (DR) plans and colocation facilities.

Yet, despite the complex nature of today’s environments, over half of companies resort to using two or more backup solutions, further adding to the complexity they’re attempting to tame. Never mind delivering on service level agreements (SLAs) or, in many cases, protecting data beyond mission-critical systems and applications.

It seems modern disaster recovery has become more about keeping the lights on than proactively avoiding the impacts of disaster. Because of this, many organizations develop DR plans to recover as quickly as possible during an outage. But, there’s just one problem: when was their most recent backup?  

WOULD YOU EAT DAY-OLD SUSHI?        

Day-old sushi is your backup. That’s right, if you’ve left your California Roll sitting out all night, chances are it’s the same age as your data if you do daily backups. One will cause a nasty bout of food poisoning and the other a massive loss of business data. Horrified or just extremely nauseated?

You may be thinking this is a bit dramatic, but if your last backup was yesterday, you’re essentially willing to accept more than 24 hours of lost business activity. For most companies, losing transactional information for this length of time would wreak havoc on their business. And, if those backups are corrupted, the ability to recover quickly becomes irrelevant.
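To put rough numbers on that point, the short Python sketch below (not from the original article; the transaction rate is a hypothetical example) estimates the worst-case window of lost business activity implied by a given backup schedule.

  from datetime import timedelta

  def worst_case_data_loss(backup_interval_hours):
      # If a failure hits just before the next scheduled backup, everything
      # written since the last backup is gone.
      return timedelta(hours=backup_interval_hours)

  def transactions_at_risk(backup_interval_hours, tx_per_hour):
      # Rough count of business transactions exposed under that schedule.
      return backup_interval_hours * tx_per_hour

  # Hypothetical example: daily backups at a shop processing 500 orders per hour.
  print(worst_case_data_loss(24))          # 1 day, 0:00:00
  print(transactions_at_risk(24, 500))     # 12000 orders potentially lost

Shrinking the interval from daily to, say, every 15 minutes cuts that exposure proportionally, which is exactly the argument the rest of this article makes for RPOs measured in minutes.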

While the answer to this challenge may seem obvious (backup more frequently), it’s far from simple. We must remember that in the quest to architect a simple DR plan, many organizations make the one wrong move that becomes their downfall: they use too many solutions, often trying to overcompensate for capabilities offered in one but not the others.

The other, and arguably more alarming, reason is a general lack of understanding about what’s truly viable with any given vendor. While many solutions today can get your organization back online in minutes, the key is minimizing the amount of business activity lost during an unplanned outage. It’s this factor that can easily be overlooked, and one that most solutions cannot deliver on.

WHEN A BLIP TURNS BRUTAL

Imagine, for a moment, you have a power failure that brings down your systems and one of two scenarios plays out. In the first, you’re confident you can recover quickly, spinning up your primary application in minutes only to realize the data you’re restoring is hours, or even days, old. Your manager is frantic and your sales team is furious as they stand by and watch every order from the past day go missing. In the second scenario, you’re confident you can recover quickly and spin up your primary application in minutes. This time, however, with data that was synced just a few seconds or minutes ago. This is the difference between a blip on the radar of your internal and external customers and potentially hundreds of thousands (or more) in lost revenue, not to mention damage to your and your organization’s reputation, which is right up there with financial loss.

For a variety of reasons ranging from perceived cost and complexity to limited network bandwidth and resistance to change, many shy away from deploying DR solutions that could very well enable them to avoid IT disasters. However, knowing how to choose a solution that can keep your “blip” from turning brutal is easily the best-kept secret separating a DR strategy that works from one that simply doesn’t.

ASK THESE 10 QUESTIONS TO MAKE SURE YOUR DR SOLUTION ISN’T TRICKING YOU

Many IT leaders agree that the volume of data lost during downtime (your recovery point objective, or RPO) is equally as important as, if not more important than, the time it takes to restore (your recovery time objective, or RTO). The trick is wading through the countless solutions that promise 100 percent uptime but fall short in supporting stringent RPOs for critical systems and applications. These questions can help you evaluate whether your solution will make the cut or leave you in the cold:

  1. Does the solution include on-premises (for quick recovery of one or a few systems), remote (for critical systems at remote locations), private cloud you have already invested in, public cloud (Amazon/Azure) and purpose-built vendor cloud options? Your needs may vary and the solution should offer broad options to fit your infrastructure and business requirements.
  2. How many vendors would be involved in your end-to-end DR solution, including software, hardware, networking, cloud services, DR hypervisors and high availability? How many user interfaces would that entail? A patchwork solution assembled from numerous vendors may increase complexity, management time and internal costs – and, more importantly, it increases the risk of bouncing between vendors if something goes wrong.
  3. Does the solution provide support and recovery for all generations of IT platforms, including non-x86, x86, physical, virtual and cloud instances running Windows and/or Linux?
  4. Does the solution offer both direct-to-cloud and hybrid cloud options? This ensures you can address any business requirement and truly safeguard your IT transformation.
  5. Does the solution deliver sub five-minute, rapid push-button failover? This allows you to continue accessing business-critical applications during a downtime event, as well as power on / run your environment with the click of a button.
  6. Does it support both rapid failover (RTOs) and RPOs of minutes, regardless of network complexity? When interruption happens, it’s vital that you can access business-critical applications with minimal disruption and effectively protect these systems by supporting RPOs of minutes.
  7. Does the solution provide automated incremental failback to bring back all applications and databases in their most current state to your on-premises environment?
  8. Does your solution leverage image-based technology to ensure no important data or configuration is left behind?
  9. Is your solution optimized for low bandwidth locations, being capable of moving large volumes of data to and from the cloud without draining bandwidth?
  10. In the event of a disaster, does the solution give you options for network connectivity, such as point-to-site VPN, site-to-site VPN and site-to-site VPN with IP takeover?
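As one way to put the checklist above to work, the following sketch (a hypothetical illustration, not part of the article; the vendor names and thresholds are placeholders) screens candidate solutions against RTO and RPO targets before the deeper questions are asked.

  from dataclasses import dataclass

  @dataclass
  class DrSolution:
      name: str
      rto_minutes: float      # vendor-claimed recovery time objective
      rpo_minutes: float      # vendor-claimed recovery point objective
      vendors_involved: int   # question 2: how many vendors end to end?

  def meets_targets(s, max_rto, max_rpo, max_vendors=2):
      # A candidate only stays on the shortlist if it satisfies every target.
      return (s.rto_minutes <= max_rto and
              s.rpo_minutes <= max_rpo and
              s.vendors_involved <= max_vendors)

  candidates = [
      DrSolution("Vendor A", rto_minutes=5, rpo_minutes=2, vendors_involved=1),
      DrSolution("Vendor B", rto_minutes=60, rpo_minutes=1440, vendors_involved=4),
  ]
  shortlist = [c.name for c in candidates if meets_targets(c, max_rto=5, max_rpo=15)]
  print(shortlist)  # ['Vendor A']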

The true value you provide your organization and your customers is the peace of mind and viability of their business when a disaster or downtime event occurs. And even when it’s business as usual, you’ll be able to support a range of needs - such as migrating workloads to a public or private cloud, advanced hypervisor protection, and support of sub-minute RTOs and RPOs - across every IT platform, from UNIX and x86 to public and private clouds.

By keeping these questions in mind, you’ll be better prepared to challenge vendor promises that often cannot be delivered and to select the right solution to safeguard your entire IT infrastructure - when disaster strikes and when it doesn’t. No more day-old sushi. No more secrets.

About the Author

As VP and GM of Arcserve Cloud Solutions, Alex Becker leads the company’s cloud and North American sales teams. Before joining Arcserve in April 2018, Alex served in various sales and leadership positions at ClickSoftware, Digital River, Fujitsu Consulting, and PTC.

Ah, Florida. Home to sun-washed beaches, Kennedy Space Center, the woeful Marlins – and one of the most costly tort systems in the country.

A significant driver of these costs is Florida’s “assignment of benefits crisis.”

Today the I.I.I. published a report documenting what the crisis is, how it’s spreading and how it’s costing Florida consumers billions of dollars. You can download and read the full report, “Florida’s assignment of benefits crisis: runaway litigation is spreading, and consumers are paying the price,” here.

An assignment of benefits (AOB) is a contract that allows a third party – a contractor, a medical provider, an auto repair shop – to bill an insurance company directly for repairs or other services done for the policyholder.

...

http://www.iii.org/insuranceindustryblog/study-florida-assignment-benefits-crisis-is-spreading-and-is-costing-consumers-billions-dollars/

Supply chain cartoon

It’s in your company’s best interest not to overlook disaster recovery (DR). If you’re hit with a cyberattack, natural disaster, power outage or any other sort of unplanned disturbance that could potentially threaten your business – you’ll be happy you had a DR plan in place.

It’s important to remember that your business is made up of a lot of moving parts, some of which may reside outside your building and under the control of others. And just because you have the foresight to prepare for the worst doesn’t mean the companies in your supply chain will also take the same precautions.

Verify that all participants within your supply chain have DR and business continuity plans in place, and that these plans are routinely tested and communicated to employees to ensure they can hold up their end of the supply chain in the event of a disaster. If you don’t, the wheels might just fall off your DR plan.

Check out more IT cartoons.

Free cloud storage is one of the best online storage deals – the price is right. 

Free cloud backup provides a convenient way to share content with friends, family and colleagues. Small businesses and individuals can take advantage of free online file storage to access extra space, for backup and recovery purposes or just store files temporarily.

Free cloud storage also tends to have paid options that are priced for individuals, small businesses, and large enterprises – so they will grow with you. The cloud storage pricing can vary considerably for these options.

The following are the best free cloud backup services, along with their associated advanced cloud storage options:

(Hint: some businesses have discovered that they get the most free cloud storage by combining free cloud services.)

...

http://www.enterprisestorageforum.com/cloud-storage/best-free-cloud-storage-providers.html

Thursday, 15 November 2018 17:11

6 Best Free Cloud Storage Providers

These are the five major developments Jerry Melnick, president and CEO, SIOS Technology, sees in cloud, High Availability and IT service management, DevOps, and IT operations analytics and AI in 2019:

 

1. Advances in Technology Will Make the Cloud Substantially More Suitable for Critical Applications

Advances in technology will make the cloud substantially more suitable for critical applications. With IT staff now becoming more comfortable in the cloud, their concerns about security and reliability, especially for five-9’s of uptime, have diminished substantially. Initially, organizations will prefer to use whatever failover clustering technology they currently use in their datacenters to protect the critical applications being migrated to the cloud. This clustering technology will also be adapted and optimized for enhanced operations in the cloud. At the same time, cloud service providers will continue to advance their service levels, leading to the cloud ultimately becoming the preferred platform for all enterprise applications.

2. Dynamic Utilization Will Make HA and DR More Cost-effective for More Applications, Further Driving Migration to the Cloud

Dynamic utilization of the cloud’s vast resources will enable IT to more effectively manage and orchestrate the services needed to support mission-critical applications. With its virtually unlimited resources spread around the globe, the cloud is the ideal platform for delivering high uptime. But provisioning standby resources that sit idle most of the time has been cost-prohibitive for many applications. The increasing sophistication of fluid cloud resources deployed across multiple zones and regions, all connected via high-quality internetworking, now enables standby resources to be allocated dynamically only when needed, which will dramatically lower the cost of provisioning high availability and disaster recovery protections.

3. The Cloud Will Become a Preferred Platform for SAP Deployments

Given its mission-critical nature, IT departments have historically chosen to implement SAP and SAP S/4HANA in enterprise datacenters, where the staff enjoys full control over the environment. As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud without sacrificing performance or availability.

4. Cloud “Quick-start” Templates Will Become the Standard for Complex Software and Service Deployments

Quick-start templates will become the standard for complex software and service deployments in private, public and hybrid clouds. These templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure and orchestrate the resources and services needed to run specific applications. Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems. By making deployments more turnkey, quick-start templates will substantially decrease the time and effort it takes for DevOps staff to set up, test and roll out dependable configurations.
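As a loose illustration of the idea (no real cloud SDK is used; the template structure and resource names below are invented for the example), a quick-start template is essentially a declarative description plus a script that provisions it in order, the same way every time.

  TEMPLATE = {
      "parameters": {"region": "us-east-1", "instance_count": 2},
      "resources": [
          {"type": "network", "name": "app-vpc"},
          {"type": "vm", "name": "app-node", "count_param": "instance_count"},
          {"type": "load_balancer", "name": "app-lb"},
      ],
  }

  def provision(resource, params):
      # A real template engine would call the cloud provider's API here;
      # this stand-in just shows the repeatable, scripted nature of the work.
      count = params.get(resource.get("count_param", ""), 1)
      for i in range(count):
          print("provisioning %s %s-%d in %s"
                % (resource["type"], resource["name"], i, params["region"]))

  def deploy(template):
      for resource in template["resources"]:
          provision(resource, template["parameters"])

  deploy(TEMPLATE)

Because the same description is executed the same way on every run, the reduced-training and reduced-human-error benefits described above follow naturally.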

5. Advanced Analytics and Artificial Intelligence Will Be Everywhere and in Everything, Including Infrastructure Operations

Advanced analytics and artificial intelligence will continue becoming more highly focused and purpose-built for specific needs, and these capabilities will increasingly be embedded in management tools. This much-anticipated capability will simplify IT operations, improve infrastructure and application robustness, and lower overall costs. Along with this trend, AI and analytics will become embedded in high availability and disaster recovery solutions, as well as cloud service provider offerings to improve service levels. With the ability to quickly, automatically and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical services delivered from the cloud will vastly improve. 

A COMSAT Perspective

We’ve seen it happen all too often – large populations devastated by natural disasters through events such as earthquakes, tsunamis, fires and extreme weather. As we’ve witnessed in the past, devastation isn’t limited to natural occurrences; it can also be man-made. Whatever the event may be, natural or man-made, first responders and relief teams depend on reliable communication to provide those most affected the help they need. Dependable satellite communication (SATCOM) technology is the difference between life and death, expedient care or delay.

Devastation can occur in the business community, as well. For businesses and government entities that depend on the Internet of Things (IoT), as most do, the absence of a communication, or continuity, plan can mean tremendous loss.

How do we stay constantly connected by land, sea or air, in vulnerable situations? Today’s teleport SATCOM technology provides reliable and affordable operational resiliency that is scalable and cost effective for anyone that depends on connectivity, including IoT.

Independent of the vulnerabilities of terrestrial land lines, today’s modern teleports provide a variety of voice and data options that include offsite data warehousing, machine-to-machine (M2M) access, and a secure, reliable connection to private networks and the World Wide Web.

Manufacturing, energy, transportation, retail, healthcare, financial services, smart cities, government and education are all closing the digital divide and becoming more and more dependent on connectivity to conduct business. They all require disaster recovery systems and reliable communications that only satellite communications can provide when land circuits are disrupted.

COMSAT, a Satcom Direct (SD) company, with the SD Data Center, has been working to provide secure, comprehensive, integrated connectivity solutions to help organizations stay connected, no matter the environment or circumstances. COMSAT’s teleports, a critical component in this process, have evolved to keep pace with changing communication needs in any situation.

“In the past, customers would come to COMSAT to connect equipment at multiple locations via satellite using our teleports. Today, the teleports do so much more. They act as a network node, data center, meet-me point and customer support center. They are no longer a place where satellite engineers focus on antennas, RF, baseband and facilities. Today’s teleports are now an extension of the customer’s business ensuring they are securely connected when needed,” said Chris Faletra, director of teleport sales.

COMSAT owns and operates two commercial teleport facilities in the United States. The Southbury teleport is located on the east coast, about 60 miles north of New York City. The Santa Paula teleport is located 90 miles north of Los Angeles on the west coast.

Each teleport has operated continuously for more than 40 years, since 1976. The teleports were built to high standards for providing life and safety services, along with a host of satellite system platforms from meteorological data gathering to advanced navigation systems. As such, they are secure facilities connected to multiple terrestrial fiber networks and act as backup for each other through both terrestrial and satellite transmission pathways.

Both facilities are data centers equipped with advanced satellite antennas and equipment backed up with automated and redundant electrical power sources, redundant HVAC systems, automatic fire detection and suppression systems, security systems and 24/7/365 network operations centers. The teleports are critical links in delivering the complete connectivity chain.

“Our teleport facilities allow us to deliver global satellite connectivity. The teleports provide the link between the satellite constellation and terrestrial networks for reliable end-to-end connectivity at the highest service levels,” said Kevin West, chief commercial officer.

COMSAT was originally created by the Communications Satellite Act of 1962 and incorporated as a publicly traded company in 1963 with the initial purpose to serve as a public, federally funded corporation intended to develop a commercial and international satellite communications system.

For the past five decades, COMSAT has played an integral role in the growth and advancement of the industry, including being a founding member of Intelsat, operating the Marisat fleet network, and founding the initial operating system of Inmarsat from its two Earth stations.

While the teleports have been in operation for more than 40 years, the technology is continuously upgraded and enhanced to proactively support communication needs. For many years, the teleports provided point-to-point connectivity for voice and low-rate data.

Now data rates are being pushed to 0.5 Gbps with thousands of remotes on the network. The teleports also often serve as the Internet service provider (ISP). They have their own diverse fiber infrastructure to deliver gigabits per second of connectivity versus the megabits that were required not so long ago.

All in the Family

In addition to growing the teleport’s capabilities through technological advancements, COMSAT is now a part of the SD family of companies, which further expands its offerings.

SD Land and Mobile, a division of Satcom Direct, offers a wide variety of satellite phone, mobile satellite Internet units and fixed satellite Internet units. SD Land and Mobile ensures SATCOM connectivity is available no matter how remote the location or how limited the cellular and data network coverage may be.

Data security is a critically important subject today. The SD Data Center, a data center wholly owned by Satcom Direct, brings enterprise-level security capabilities to data transmissions in the air, on the ground and over water. The SD Data Center also provides industry-compliant data center solutions and business continuity planning for numerous industries including healthcare, education, financial, military, government and technology.

“Together, we deliver the infrastructure, products and data security necessary to keep you connected under any circumstance. We have a complete suite of solutions and capabilities for our clients,” said Rob Hill, business development.

Keeping Up with Market Needs and Trends

COMSAT’s pioneering spirit is reflected in the company’s ongoing analysis of, and adjustment to, current market needs and trends. The aero market is currently the fastest-growing market, with new services and higher data rates being offered almost daily. Maritime, mobility and government markets are thriving as well.

No matter what direction the market is headed, COMSAT’s teleports and the SD family of companies will be ready to help clients weather the storm. comsat.com

To learn about SD Land & Mobile, head over to satcomstore.com

For additional information regarding the SD Data Center, access sddatacenter.com

COMSAT’s provision of teleport services is managed by Guy White, director of US teleports. As station director for COMSAT’s Southbury, Connecticut, and Santa Paula, California, teleports, Mr. White is responsible for the day-to-day operations and engineering of both facilities, including program planning, budget control, task scheduling, priority management, personnel matters, maintenance contract control, and other tasks related to teleport operations.

Mr. White began his career in the SATCOM industry in 1980 as a technician at the Southbury facility. Since then, he successively held the positions of senior technician, lead technician, maintenance technician and customer service engineer at Southbury, until he assumed the position of operations manager in 1992 at COMSAT’s global headquarters in Washington D.C. He returned to Southbury as station engineer in 1995 and has served as station director of the Southbury teleport since May of 2000. Mr. White’s responsibilities expanded to include the Santa Paula teleport in May of 2008.

Increase your business continuity (BC) knowledge and expertise by checking out this list of an even dozen top BC resources.

Business continuity is a sprawling, fast-changing, and challenging field. Fortunately, there are a lot of great resources out there that can help you in your drive to improve your knowledge and protect your organization.

In today’s post, I round up a “dynamic dozen” resources that you should be aware of in your role as a business continuity professional.

Some of these might be old friends and others might be new to you. In any case, you might find it beneficial to review the websites and other resources on this list as you update your strategies, perform risk assessments, and identify where to focus your future efforts.

Read on to become a master of disaster. And remember that the most important resource in any BC program is capable, knowledgeable, and well-educated people.

...

https://bcmmetrics.com/key-bc-resources/

By Cassius Rhue, Director of Engineering at SIOS Technology

All public cloud service providers offer some form of guarantee regarding availability, and these may or may not be sufficient, depending on each application’s requirement for uptime. These guarantees typically range from 95.00% to 99.99% of uptime during the month, and most impose some type of “penalty” on the service provider for falling short of those thresholds.

Most cloud service providers offer a 99.00% uptime threshold, which equates to about seven hours of downtime per month. And for many applications, those two-9’s might be enough. But for mission-critical applications, more 9’s are needed, especially given the fact that many common causes of downtime are excluded from the guarantee.
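The arithmetic behind those figures is simple; the sketch below (an illustration, assuming a roughly 730-hour month) converts an uptime percentage into the downtime it permits each month.

  def allowed_downtime_hours(availability_pct, hours_in_month=730.0):
      # Monthly downtime permitted by an uptime guarantee.
      return (1.0 - availability_pct / 100.0) * hours_in_month

  for pct in (95.0, 99.0, 99.9, 99.99, 99.999):
      print("%.3f%% uptime -> %.2f hours of downtime per month"
            % (pct, allowed_downtime_hours(pct)))
  # 99.0% works out to about 7.3 hours per month, matching the figure above;
  # five-9's (99.999%) allows well under a minute per month.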

There are, of course, cost-effective ways to achieve five-9’s high availability and robust disaster recovery protection in configurations using public cloud services, either exclusively or as part of a hybrid arrangement. This article highlights limitations involving HA and DR provisions in the public cloud, explores three options for overcoming these limitations, and describes two common configurations for failover clusters.

Caveat Emptor in the Cloud

While all cloud service providers (CSPs) define “downtime” or “unavailable” somewhat differently, these definitions include only a limited set of all possible causes of failures at the application level. Generally included are failures affecting a zone or region, or external connectivity. All CSPs also offer credits ranging from 10% for failing to meet four-9’s of uptime to around 25% for failing to meet two-9’s of uptime.

Redundant resources can be configured to span the zones and/or regions within the CSP’s infrastructure, and that will help to improve application-level availability. But even with such redundancy, there remain some limitations that are often unacceptable for mission-critical applications, especially those requiring high transactional throughput performance. These limitations include each master being able to create only a single failover replica, requiring the use of the master dataset for backups, and using event logs to replicate data. These and other limitations can increase recovery time during a failure and make it necessary to schedule at least some planned downtime.

The more significant limitations involve the many exclusions to what constitutes downtime. Here are just a few examples, drawn from actual CSP service level agreements, of exclusions from “downtime” or “unavailability.” They cover application-level failures resulting from:

  • factors beyond the CSP’s reasonable control (in other words, some of the stuff that happens regularly, such as carrier network outages and natural disasters)
  • the customer’s software, or third-party software or technology, including application software
  • faulty input or instructions, or any lack of action when required (in other words, the inevitable mistakes caused by human fallibility)
  • problems with individual instances or volumes not attributable to specific circumstances of “unavailability”
  • any hardware or software maintenance as provided for pursuant to the agreement

 

To be sure, it is reasonable for CSPs to exclude certain causes of failure. But it would be irresponsible for system administrators to treat these exclusions as excuses; instead, they make it necessary to ensure application-level availability by some other means.

Three Options for Improving Application-level Availability

Provisioning resources for high availability in a way that does not sacrifice security or performance has never been a trivial endeavor. The challenge is especially difficult in a hybrid cloud environment where the private and public cloud infrastructures can differ significantly, which makes configurations difficult to test and maintain, and can result in failover provisions failing when actually needed.

For applications where the service levels offered by the CSP fall short, there are three additional options available based on the application itself, features in the operating system, or through the use of purpose-built failover clustering software.

The HA/DR options that might appear to be the easiest to implement are those specifically designed for each application. A good example is Microsoft’s SQL Server database with its carrier-class Always On Availability Groups feature. There are two disadvantages to this approach, however. The higher licensing fees, in this case for the Enterprise Edition, can make it prohibitively expensive for many needs. The more troubling disadvantage is the need for different HA/DR provisions for different applications, which makes ongoing management a constant (and costly) struggle.

The second option involves using uptime-related features integrated into the operating system. Windows Server Failover Clustering, for example, is a powerful and proven feature that is built into the OS. But on its own, WSFC might not provide a complete HA/DR solution because it lacks a data replication feature. In a private cloud, data replication can be provided using some form of shared storage, such as a storage area network. But because shared storage is not available in public clouds, implementing robust data replication requires using separate commercial or custom-developed software.

For Linux, which lacks a feature like WSFC, the need for additional HA/DR provisions and/or custom development is considerably greater. Using open source software like Pacemaker and Corosync requires creating (and testing) custom scripts for each application, and these scripts often need to be updated and retested after even minor changes are made to any of the software or hardware being used. But because getting the full HA stack to work well for every application can be extraordinarily difficult, only very large organizations have the wherewithal needed to even consider taking on the effort.

Ideally there would be a “universal” approach to HA/DR capable of working cost-effectively for all applications running on either Windows or Linux across public, private and hybrid clouds. Among the most versatile and affordable of such solutions is the third option: the purpose-built failover cluster. These HA/DR solutions are implemented entirely in software that is designed specifically to create, as their designation implies, a cluster of virtual or physical servers and data storage with failover from the active or primary instance to a standby to assure high availability at the application level.

These solutions provide, at a minimum, a combination of real-time data replication, continuous application monitoring and configurable failover/failback recovery policies. Some of the more robust ones offer additional advanced capabilities, such as a choice of block-level synchronous or asynchronous replication, support for Failover Cluster Instances (FCIs) in the less expensive Standard Edition of SQL Server, WAN optimization for enhanced performance and minimal bandwidth utilization, and manual switchover of primary and secondary server assignments to facilitate planned maintenance.

Although these general-purpose solutions are generally storage-agnostic, enabling them to work with storage area networks, shared-nothing SANless failover clusters are normally preferred based on their ability to eliminate potential single points of failure.

Two Common Failover Clustering Configurations

Every failover cluster consists of two or more nodes, and locating at least one of the nodes in a different datacenter is necessary to protect against local disasters. Presented here are two popular configurations: one for disaster recovery purposes; the other for providing both mission-critical high availability and disaster recovery. Because high transactional performance is often a requirement for highly available configurations, the example application is a database.

The basic SANless failover cluster for disaster recovery has two nodes with one primary and one secondary or standby server or server instance. This minimal configuration also requires a third node or instance to function as a witness, which is needed to achieve a quorum for determining assignment of the primary. For database applications, replication to the standby instance across the WAN is asynchronous to maintain high performance in the primary instance.
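The witness exists purely to break ties. Below is a minimal sketch of the majority-vote rule (a generic quorum illustration, not any vendor's actual implementation).

  def has_quorum(votes_up, total_votes):
      # A node (or partition) may hold the primary role only with a strict
      # majority of votes, which is why a 2-node cluster needs a 3rd witness.
      return votes_up > total_votes // 2

  # Two data nodes plus one witness = 3 votes.
  print(has_quorum(2, 3))  # True: surviving node plus witness can promote
  print(has_quorum(1, 3))  # False: an isolated node must not promote itself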

The SANless failover cluster affords a rapid recovery in the event of a failure in the primary, making this basic DR configuration suitable for many applications. And because it is capable of detecting virtually all possible failures, including those not counted as downtime in public cloud services, it will work in a private, public or hybrid cloud environment.

For example, the primary could be in the enterprise datacenter with the secondary deployed in the public cloud. Because the public cloud instance would be needed only during planned maintenance of the primary or in the event of its failure—conditions that can be fairly quickly remedied—the service limitations and exclusions cited above may well be acceptable for all but the most mission-critical of applications.

Figure: This three-node SANless failover cluster has one active and two standby server instances, making it capable of handling two concurrent failures with minimal downtime and no data loss.

The figure shows an enhanced three-node SANless failover cluster that affords both five-9’s high availability and robust disaster recovery protection. As with the two-node cluster, this configuration will also work in a private, public or hybrid cloud environment. In this example, servers #1 and #2 are located in an enterprise datacenter with server #3 in the public cloud. Within the datacenter, replication across the LAN can be fully synchronous to minimize the time it takes to complete a failover and, therefore, maximize availability.

When properly configured, three-node SANless failover clusters afford truly carrier-class HA and DR. The basic operation is application-agnostic and works the same for Windows or Linux. Server #1 is initially the primary or active instance that replicates data continuously to both servers #2 and #3. If it experiences a failure, the application would automatically failover to server #2, which would then become the primary replicating data to server #3.

Immediately after a failure in server #1, the IT staff would begin diagnosing and repairing whatever caused the problem. Once fixed, server #1 could be restored as the primary with a manual failback, or server #2 could continue functioning as the primary replicating data to servers #1 and #3. Should server #2 fail before server #1 is returned to operation, as shown, server #3 would become the primary. Because server #3 is across the WAN in the public cloud, data replication is asynchronous and the failover is manual to prevent “replication lag” from causing the loss of any data.
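A rough sketch of that failover policy is shown below (illustrative only; the node priorities and the automatic-versus-manual rule come from the description above, but the code is not any vendor's implementation). Failover is automatic between the synchronously replicated datacenter nodes and deliberately manual to the asynchronous node across the WAN, so replication lag cannot silently discard data.

  NODES = [
      {"name": "server1", "site": "datacenter", "replication": "sync"},
      {"name": "server2", "site": "datacenter", "replication": "sync"},
      {"name": "server3", "site": "public cloud", "replication": "async"},
  ]

  def next_primary(failed):
      # Pick the first healthy node in priority order.
      for node in NODES:
          if node["name"] not in failed:
              return node
      return None

  def failover(failed):
      target = next_primary(failed)
      if target is None:
          return "no healthy node available"
      mode = "automatic" if target["replication"] == "sync" else "manual (operator-confirmed)"
      return "%s failover to %s in the %s" % (mode, target["name"], target["site"])

  print(failover({"server1"}))             # automatic failover to server2
  print(failover({"server1", "server2"}))  # manual failover to server3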

With SANless failover clustering software able to detect all possible failures at the application level, it readily overcomes the CSP limitations and exclusions mentioned above, and makes it possible for this three-node configuration to be deployed entirely within the public cloud. To afford the same five-9’s high availability based on immediate and automatic failovers, servers #1 and #2 would need to be located within a single zone or region where the LAN facilitates synchronous replication.

For appropriate DR protection, server #3 should be located in a different datacenter or region, where the use of asynchronous replication and manual failover/failback would be needed for applications requiring high transactional throughput. Three-node clusters can also facilitate planned hardware and software maintenance for all three servers while providing continuous DR protection for the application and its data.

By offering multiple, geographically-dispersed datacenters, public clouds afford numerous opportunities to improve availability and enhance DR provisions. And because SANless failover clustering software makes effective and efficient use of all compute, storage and network resources, while also being easy to implement and operate, these purpose-built solutions minimize all capital and operational expenditures, resulting in high availability being more robust and more affordable than ever before.

# # #

About the Author

Cassius Rhue is Director of Engineering at SIOS Technology, where he leads the software product development and engineering team in Lexington, SC. Cassius has over 17 years of software engineering, development and testing experience, and a BS in Computer Engineering from the University of South Carolina. 

Speed up recovery process, improve quality and add to contractor credibility

 

By John Anderson, FLIR

Thermal imaging tools integrated with moisture meters can speed up the post-hurricane recovery process, improve repair quality, and add to contractor credibility. A thermal imaging camera can help you identify moisture areas faster and can lead to more accurate inspections with fewer call backs for verification by insurance companies. Many times, a good thermal image sent via email may be sufficient documentation to authorize additional work, leading to improved efficiency in the repair process.

Post-event process

Contractors need to be able to evaluate water damage quickly and accurately after a hurricane or other storm event. This can be a challenge using traditional tools, especially pinless (non-invasive) moisture meters that offer a nondestructive measurement of moisture in wood, concrete and gypsum. Operating on the principle of electrical impedance, pinless moisture meters read wood using a scale of 5 to 30 percent moisture content (MC); they read non-wood materials on a relative scale of 0 to 100 percent MC. [1] While simple to use, identifying damage with any traditional moisture meter alone is a tedious process, often requiring at least 30 to 40 readings. And the accuracy of the readings is only as good as the user’s ability to find and measure all the damaged locations.

Using a thermal imaging camera along with a moisture meter is much more accurate. These cameras work by detecting the infrared radiation emitted by objects in the scene. The sensor takes the energy and translates it into a visible image. The viewer sees temperatures in the image as a range of colors: red, orange and yellow indicate heat, while dark blue, black or purple signifies colder temperatures associated with evaporation or water leaks and damage. Using this type of equipment speeds up the process and tracks the source of the leak—providing contractors with a visual to guide them and confirm where the damage is located. Even a basic thermal imaging camera, one that is used in conjunction with a smart phone, is far quicker and more accurate at locating moisture damage than a typical noninvasive spot meter.

Infrared Guided Measurement (IGM)

An infrared (IR) thermal imaging camera paired with a moisture meter is a great combination. The user can find the cold spots with the thermal camera and then confirm moisture is present with the moisture meter. This combination is widely used today, prompting FLIR to develop the MR176 infrared guided measurement (IGM™) moisture meter. This all-in-one moisture meter and thermal imager allows contractors to use thermal imaging and take moisture meter readings for a variety of post-storm cleanup tasks. These include inspecting the property, preparing for remediation, and—during remediation— assessing the effectiveness of dehumidifying equipment. The tool can also be used down the road after remediation to identify leaks that may—or may not—be related to the hurricane.

During the initial property inspection, the thermal imaging camera visually identifies cold spots, which are usually associated with moisture evaporation. Without infrared imaging, the user is left to blindly test for moisture—and may miss areas of concern altogether.

While preparing for remediation, a tool that combines a thermal imaging camera with a relative humidity and temperature (RH&T) sensor can provide contractors with an easy way to calculate the equipment they will need for the project. This type of tool measures the weight of the water vapor in the air in grains per pound (GPP), relative humidity, and dew point values. Restoration contractors know how many gallons of water per day each piece of equipment can remove and, using the data provided by the meter, can determine the number of dehumidifiers needed in a given space to dry out the area.
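As a back-of-the-envelope example of that sizing calculation (the water load, unit capacity, and drying window below are hypothetical figures, not manufacturer ratings):

  import math

  def dehumidifiers_needed(gallons_to_remove, unit_capacity_gal_per_day, drying_days):
      # Divide the estimated water load by what one unit can remove in the
      # allotted drying time, rounding up to whole machines.
      return math.ceil(gallons_to_remove / (unit_capacity_gal_per_day * drying_days))

  # Hypothetical job: roughly 90 gallons to pull out of the structure,
  # units rated at 15 gallons per day, and a 3-day drying target.
  print(dehumidifiers_needed(90, 15, 3))  # 2 units

Because the meter supplies the humidity and GPP readings that feed the water-load estimate, the same tool supports both the sizing decision and the per-hour billing trade-off discussed next.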

The dehumidifiers reduce moisture and restore proper humidity levels, preventing the build-up of air toxins and neutralizing odors from hurricane water damage. Since the equipment is billed back to the customer or insurance company on a per-hour basis, contractors must balance the costs with the need for full area coverage.

During remediation, moisture meters with built-in thermal imaging cameras provide key data that contractors can use to spot check the drying process and equipment effectiveness over time. In addition, thermal imaging can be used to identify areas that may not be drying as efficiently as others and can guide the placement of drying equipment.

The equipment is also useful after the fact, if, for example, contractors are looking to identify the source of small leaks that may or may not be related to the damage from the hurricane. Using a moisture meter/thermal camera combination can help them track the location and source of the moisture, as well as determine how much is remaining.

Remodeling contractors who need to collect general moisture data can benefit from thermal imaging moisture meters, as well. For example, tracing a leak back to its source can be a challenge. A leak in an attic may originate in one area of the roof and then run down into different parts of the structure. A moisture meter equipped with a thermal imager can help them determine where the leak actually started by tracing a water trail up the roof rafter to the entrance spot.

Choosing the right technology

A variety of thermal imaging tools are available, depending upon whether the contractor is looking for general moisture information, or needs more precise information on temperature and relative humidity levels.

For example, the FLIR MR176 IGM™ moisture meter with replaceable hygrometer is an all-in-one tool equipped with a built-in thermal camera that can visually guide contractors to the precise spot where they need to measure moisture. An integrated laser and crosshair helps pinpoint the surface location of the issue found with the thermal camera. The meter comes with an integrated pinless sensor and an external pin probe, which gives contractors the flexibility to take either non-intrusive or intrusive measurements.

Coupled with a field-replaceable temperature and relative humidity sensor, and automatically calculated environmental readings, the MR176 can quickly and easily produce the right measurements during the hurricane restoration and remediation process. Users can customize thermal images by selecting which measurements to integrate, including moisture, temperature, relative humidity, dew point, vapor pressure and mixing ratio. They can also choose from several color palettes, and use a lock-image setting to prevent extreme hot and cold temperatures from skewing images during scanning.

Also available is the FLIR MR160, which is a good tool for remodeling contractors looking for general moisture information, for example, pinpointing drywall damage from a washing machine, finding the source of a roof leak that is showing up in flooring or drywall, as well as locating ice dams. It has many of the features of the MR176 but does not include the integrated RH&T sensor.

Capturing images with a thermal camera builds contractor trust and credibility

Capturing images of hurricane-related damage with a thermal camera provides the type of documentation that builds contractor credibility and increases trust with customers. These images help customers understand and accept contractor recommendations. Credibility increases when customers are shown images demonstrating conclusively why an entire wall must be removed and replaced.

When customers experience a water event, proper photo documentation can bolster their insurance claims. The inclusion of thermal images can improve insurance payout outcomes and speed up the claims process.

Post-storm cleanup tool for the crew

By providing basic infrared imaging functions, in combination with multiple moisture sensing technologies and the calculations made possible by the RH&T sensor, an imaging moisture meter such as the MR176 is a tool the entire remediation crew can carry during post-storm cleanup.

References

[1] Types of Moisture Meters, https://www.grainger.com/content/qt-types-of-moisture-meters-346, retrieved 5/29/18

Expert service providers update aging technology with minimal disruption

 

By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

Aging power control and automation systems carry risk of downtime for mission-critical power systems, both through the reduced availability of replacement components and through the loss of the knowledge needed to replace the devices within them. Of course, as components age, their risk of failure increases. Additionally, as technology advances, these same components are discontinued and become unavailable, and over time, service personnel lose the know-how to support the older generation of products. At the same time, though, complete replacement of these aging systems can be extremely expensive, and may also require far more downtime or additional space than these facilities can sustain.

The solution, of course, is the careful maintenance and timely replacement of power control and automation system components. By replacing only some components of the system at any given time, customers can benefit from the new capabilities and increased reliability of current technology, all while uptime is maintained. In particular, expert service providers can provide in-house wiring, testing, and vetting of system upgrades before components even ship to customers, ensuring minimal downtime. These services are particularly useful in healthcare facilities and datacenter applications, where power control is mission-critical and downtime is costly.

Automatic Transfer Switch (ATS) controllers and switchgear systems require some different types of maintenance and upgrades due to the differences in their components; however, the cost savings and improved uptime that maintenance and upgrades can provide are available to customers with either of these types of systems. The following maintenance programs and system upgrades can extend the lifetime of a power control system, minimize downtime in mission-critical power systems, and save costs.

Audits and Preventative Maintenance

Before creating a maintenance schedule or beginning upgrades, getting an expert technician into a facility to audit the existing system provides long-term benefits and the ability to prioritize. With a full equipment audit, a technician or application engineer who specializes in upgrading existing systems can look at an existing system and provide customers with a detailed migration plan for upgrading the system, in order of priority, as well as a plan for preventative maintenance.

Whenever possible, scheduled preventative maintenance should be performed by factory-trained service employees of the power control system OEM, rather than by a third party. In addition to having the most detailed knowledge of the equipment, factory-trained service employees can typically provide the widest range of maintenance services. While third-party testing companies may only maintain power breakers and protective relay devices, OEM service providers will also maintain the controls within the system.

Through these system audits and regular maintenance plans, technicians can ensure that all equipment is and remains operational, and they can identify components that are likely to become problematic before they actually fail and cause downtime in a mission-critical system.

Upgrades for ATS Control Systems with Minimal System Disruption

In ATS controller systems, control upgrades can provide customers with greater power monitoring and metering. In addition, replacing the controls for aging ATS systems ensures that all components of the system controls are still in production, and therefore will be available for replacement at a reasonable cost and turnaround time. In comparison, trying to locate out-of-production components for an old control package can lead to high costs and a long turnaround time for repairs.

The most advanced service providers minimize downtime during ATS control upgrades by pre-wiring the control and fully testing it within their own production facilities. When Russelectric performs ATS control upgrades, a pre-wired, fully-tested control package is shipped to the customer in one piece. The ATS is shut down only for as long as it takes to install the new controls retrofit, minimizing disruption.

In addition, new technology also improves system usability, similar to making the switch from a flip phone to a smartphone. New ATS controls from Russelectric, for example, feature a sizeable color screen with historical data and alarm reporting. All of the alerts, details and information on the switch are easily accessible, providing the operator with greater information when it matters most. This upgrade also paves the way for optional remote monitoring through a SCADA or HMI system, further improving usability and ease of system monitoring.

Switchgear System Upgrades

For switchgear systems, four main upgrades are possible in order to improve system operations and reliability without requiring a full system replacement: operator interface upgrades, PLC upgrades, breaker upgrades, and controls retrofits. Though each may be necessary at different times for different power control systems, all four upgrades are cost-effective, extend system lifespans, and minimize downtime.

Operator Interface Upgrades for Switchgear Systems

Similar to the ATS control upgrade, an operator interface (OI) or HMI upgrade for a switchgear power control system can greatly improve system usability, making monitoring easier and more effective for operators. This upgrade enables operators to see the system power flow, as well as to view alarms and system events in real time.

Also similar to ATS control upgrades, upgrading the OI also ensures that components will be in production and easily available for repairs. The greatest benefit, though, is providing operators real-time vision into system alerts without requiring them to walk through the system itself and search for indicator lights and alarms. Though upgrading this interface does not impact the actual system control, it provides numerous day-to-day benefits, enabling faster and easier troubleshooting and more timely maintenance.

Upgrades to PLC and Communication Hardware without Disrupting Operations

Many existing systems utilize PLC architectures that are legacy or approaching end of life. PLC upgrades allow for upgrading a switchgear control system to the newest technology with minimal program changes. Relying on expert OEM service providers for this process can also simplify the process of upgrading PLC and communications hardware, protecting customers’ investments in power control systems while delivering noticeable system benefits.

A PLC upgrade by Russelectric includes all new PLC and communication hardware for the controls of the existing system, but maintains the existing logic and converts it for the latest technology. Upgrading the technology does not require new logic or operational sequences. As a result, the operations of the system remain unchanged and existing wiring is maintained. This greatly reduces the likelihood that the system will need to be fully recommissioned and minimizes the downtime necessary for testing. Russelectric’s process of converting existing logic and, as previously mentioned, testing components in its own production facility before shipping them for installation allows it to keep a system operational through the entire upgrade process. In addition, Russelectric has developed installation processes that systematically replace the PLCs one at a time, converting the communications from PLC to PLC as components are replaced. This keeps systems operational throughout the process and minimizes the risk of mission-critical power system downtime.
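The same one-at-a-time discipline can be expressed as a generic rolling-upgrade pattern; the sketch below is illustrative only and is not Russelectric's actual procedure.

  plcs = [
      {"name": "PLC-1", "generation": "legacy", "online": True},
      {"name": "PLC-2", "generation": "legacy", "online": True},
      {"name": "PLC-3", "generation": "legacy", "online": True},
  ]

  def system_operational(units):
      # The system stays up as long as every unit except the one being
      # swapped remains online.
      return sum(1 for u in units if u["online"]) >= len(units) - 1

  def upgrade_one(unit, units):
      unit["online"] = False                 # take only this unit out of service
      assert system_operational(units), "upgrade would violate the uptime requirement"
      unit["generation"] = "current"         # install the new PLC and comms hardware
      unit["online"] = True                  # re-establish PLC-to-PLC communications

  for plc in plcs:
      upgrade_one(plc, plcs)

  print(all(p["generation"] == "current" for p in plcs))  # True, with no full-system outage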

Breaker & Protective Relay Upgrades for Added Reliability and Protection

Breaker upgrades may often be necessary to ensure system protection and reliability, even through many years of normal use. Two different types of breaker modifications or upgrades are available for switchgear power control systems: breaker retrofill and breaker retrofit.  A retrofill breaker upgrade calls for an entirely new device in place of an existing breaker system. Retrofill upgrades maintain existing protections, lengthen service life, and provide added benefits of power metering and other add-on protections, like arc flash protections and maintenance of UL approvals.

Breaker retrofits can provide these same benefits, but they do so through a process of reengineering an existing breaker configuration. This upgrade requires a somewhat more labor-intensive installation, but provides generally the same end result. Whether a system requires a retrofit or retrofill upgrade is largely determined by the existing power breakers in a system.

For medium voltage systems, protective relay upgrades from single-function solid state or mechanical protective devices to multifunction protective devices provide protection and reliability upgrades to a system. Upgrading to multifunction protective relays provides enhanced protection, lengthens the service life of a system, and provides added benefits of power metering, communications and other add-on protections, like arc flash protection.

Russelectric prewires and tests new doors with the new protective devices ready for installation. This minimizes disruption to a system and allows for easy replacement.

Controls Retrofits Revive Aging Systems

For older switchgear systems that predate PLC controls, one of the most effective upgrades for extending system life and serviceability is a controls retrofit. This process includes a fully new control interior, interior control panels, and doors. This enables customers to replace end-of-life components, update to the latest control equipment and sequence standards, and access benefits of visibility described above for OI upgrades. 

The major consideration and requirement is to maintain the switchgear control wiring interconnect location, eliminating the need for new control wiring between other switchgear, ATSs, and generators. Retrofitting the controls rather than replacing them allows the existing wiring to be maintained and provides major cost savings for the system upgrade.

Just as with ATS controls retrofits, Russelectric builds the control panels and doors within its own facilities and simulates the non-controls components from the customer’s system that are not being replaced. In doing so, technicians can fully test the retrofit before replacing the existing controls. What’s more, Russelectric can provide customers with temporary generators and temporary control panels so that the existing system can be strategically upgraded, one cubicle at a time, while maintaining a fully operational system.

Benefits of an Expert Service Provider

As described throughout this article, relying on expert OEM service providers like Russelectric amplifies the benefits of power control system upgrades. With the right service provided at the right time by industry experts, mission-critical power control systems, like those in healthcare facilities and datacenters, can be upgraded with a minimum of downtime and costs. OEMs are often the greatest experts on their own products, with access to all of the drawings and documentation for each product, and are therefore most able to perform maintenance and upgrades in the most effective and efficient manner.

Some of the most important cost-saving measures for power control system upgrades can only be achieved by OEM service providers. For example, maintaining existing interconnect control wiring between power equipment and external equipment provides key cost savings, as it eliminates the need for electrical contractors when installing the new system. Given that steel and copper substructure hardware can greatly outlast control components, retrofitting these existing components can also provide major cost savings. Finally, having access to temporary controls or power sources, pre-tested components, and the manufacturer’s component knowledge all help to practically eliminate downtime, saving costs and removing barriers to upgrades. By upgrading a power control system with an OEM service provider, power system customers with mission-critical power systems gain the latest technology without the worry of downtime and the huge costs associated with full system replacement.

This document gives guidelines for monitoring hazards within a facility as a part of an overall emergency management and continuity programme by establishing the process for hazard monitoring at facilities with identified hazards.

It includes recommendations on how to develop and operate systems for the purpose of monitoring facilities with identified hazards. It covers the entire process of monitoring facilities.

This document is generic and applicable to any organization. The application depends on the operating environment, the complexity of the organization and the type of identified hazards.

...

https://www.iso.org/standard/67159.html

By GREG SPARROW

In the wake of the recent Facebook and Cambridge Analytica scandal, data and personal privacy matters have come to the forefront of consumers’ minds. When an organization like Facebook falls into trouble, big data is often blamed, but IS big data actually at fault? When tech companies utilize and contract with third-party data mining companies, aren’t these data collection firms doing exactly what they were designed to do?

IBM markets its Watson as a way to get closer to knowing consumers; however, when it does just that, it is perceived as an infringement on privacy. In the wake of data privacy and security violations, companies have become notorious for pointing the finger elsewhere. Like any other scapegoat, big data has become an easy way out: a chance for the company to appear to side with, and support, the consumer. Yet many are long overdue in making changes that actually do protect and support the customer and now find themselves needing to earn back lost consumer trust. Companies looking to please their customers publicly agree that big data is the issue but behind the scenes may be doing little or nothing to change how they interact with these organizations. By pushing the blame to these data companies, they redirect the problem, casting both their company and their consumers as victims of something beyond their control.

For years, data mining has been used to help companies better understand their customers and market environment. Data mining is a means to offer insights from business to buyer or potential buyer. Before companies and resources like Facebook, Google, and IBM’s Watson existed, customers knew very little about their personal data. More recently, the general public has begun to understand what data mining actually is and how it is used, and to recognize the data trail they leave through their online activities.

Hundreds of articles have been written about data privacy, additional regulations to protect individuals’ data rights have been proposed, and some have even been signed into law. With the passing of new legislation pertaining to data, customers are going as far as filing lawsuits against companies that may have been storing personally identifiable information without their knowledge or proper consent.

State regulations have increasingly propelled interest in data privacy, calling for what some believe might develop into national privacy law. Because of this, organizations are starting to take notice and have begun implementing policy changes to protect themselves from scrutiny. Businesses are taking a closer look at the changing trends within the marketplace, as well as the public’s growing awareness of how their data is being used. Direct consumer-facing brands, most of all, need to ensure they have appropriate security frameworks in place. Perhaps the issue among consumers is not the data collected, but how it is presented back to them or shared with others.

Generally speaking, consumers like content and products that are tailored to them. Many customers don’t mind data collection, marketing retargeting, or even promotional advertisements if they know they are benefiting from them. As consumers and online users, we often willingly give up our information in exchange for free access and convenience, but have we thoroughly considered how that information is being used, brokered, and shared? If we did, would we pay more attention to who and what we share online?

Many customers have expressed their unease when their data is incorrectly interpreted and relayed. Understandably, they are irritated by irrelevant communications and become fearful when they lack trust in the organization behind the message. Is their sensitive information now in a databank with heightened risk of breach? When a breach or alarming infraction occurs, customers, current and prospective alike, grow more concerned.

The general public has become acquainted with the positive aspects of big data, to the point where they expect retargeted ads and customized communications. On the other hand, even when they have agreed to the terms and conditions, consumers are quick to blame big data when something goes wrong, rather than the core brand they chose to trust with their information.

About Greg Sparrow:

Greg Sparrow, Senior Vice President and General Manager at CompliancePoint, has over 15 years of experience with Information Security, Cyber Security, and Risk Management. His knowledge spans multiple industries and entities including healthcare, government, card issuers, banks, ATMs, acquirers, merchants, hardware vendors, encryption technologies, and key management.

 

About CompliancePoint:

CompliancePoint is a leading provider of information security and risk management services focused on privacy, data security, compliance and vendor risk management. The company’s mission is to help clients interact responsibly with their customers and the marketplace. CompliancePoint provides a full suite of services across the entire life cycle of risk management using a FIND, FIX & MANAGE approach. CompliancePoint can help organizations prepare for critical needs such as GDPR with project initiation and buy-in, strategic consulting, data inventory and mapping, readiness assessments, PIMS & ISMS framework design and implementation, and ongoing program management and monitoring. The company’s history of dealing with both privacy and data security, inside knowledge of regulatory actions, and combination of services and technology solutions make CompliancePoint uniquely qualified to help clients achieve both a secure and compliant framework.

https://blog.sungardas.com/2018/10/machine-learning-cartoon-its-time-to-study-up-for-the-next-wave-of-innovation/


Successful companies understand they have to innovate to remain relevant in their industry. Few innovations are more buzzworthy than machine learning (ML).

The Accenture Institute for High Performance found that at least 40 percent of the companies surveyed were already employing ML to increase sales and marketing performance. Organizations are using ML to raise ecommerce conversion rates, improve patient diagnoses, boost data security, execute financial trades, detect fraud, increase manufacturing efficiency and more.

When asked which IT technology trends will define 2018, Alex Ough, CTO Architect at Sungard AS, noted that ML “will continue to be an area of focus for enterprises, and will start to dramatically change business processes in almost all industries.”

Of course, it’s important to remember that implementing ML in your business isn’t as simple as sticking an educator in front of a classroom of computers – particularly when companies are discovering they lack the skills to actually build machine learning systems that work at scale.

Machine learning, like many aspects of digital transformation, requires a shift in people, processes and technology to succeed. While that kind of change can be tough to stomach at some organizations, the alternative is getting left behind.


 


What is the price of network security? If your company understands we live in an interconnected world where cyber threats are continuously growing and developing, no cost is too great to ensure the protection of your crown jewels.

However, no matter how many resources you put into safeguarding your most prized “passwords,” the biggest threat to your company’s security is often the toughest to control – the human element.

It’s not that your employees are intentionally trying to sabotage the company. But, even if you’ve locked away critical information that can only be accessed by passing security measures in the vein of “Mission Impossible,” mistakes happen. After all, humans are only human.

The best course of action is to educate employees on the importance of having good cybersecurity hygiene. Inform them of the potential impacts of a cybersecurity incident, train them with mock phishing emails and other security scenarios, and hold employees accountable.

Retina scanners, complex laser grids and passwords stored in secure glass displays seem like adequate enough security measures. Unfortunately, employees don’t always get the memo that sensitive information shouldn’t be shouted across the office. Then again, they’re only human.


https://blog.sungardas.com/2018/09/it-security-cartoon-why-humans-are-cybersecuritys-biggest-adversary/

Complex system provided by Russelectric pioneers microgrid concept

By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

A unique power control system for Quinnipiac University’s York Hill Campus, located in Hamden, Connecticut, ties together a range of green energy power generation sources with utility and emergency power sources. The powerful supervisory control and data acquisition (SCADA) system gives campus facilities personnel complete information on every aspect of the complex system. Initially constructed when the term microgrid had barely entered our consciousness, the system continues to grow as the master plan’s vision of sustainability comes to fruition.

Hilltop campus focuses on energy efficiency and sustainability

In 2006, Quinnipiac University began construction on its new York Hill Campus, perched high on a hilltop with stunning views of Long Island Sound. Of course, the campus master plan included signature athletic, residence, parking, and activity buildings that take maximum advantage of the site. But of equal importance, it incorporated innovative electrical and thermal distribution systems designed to make the new campus energy efficient, easy to maintain, and sustainable. Electrical distribution requirements, including primary electrical distribution, emergency power distribution, campus-wide load shedding, and cogeneration, were considered, along with the thermal energy components of heating, hot water, and chilled water.

The final design includes a central high-efficiency boiler plant, a high-efficiency chiller plant, and a campus-wide primary electric distribution system with automatic load shed and backup power. The design also incorporates a microturbine trigeneration system to provide electrical power while recovering waste heat to help heat and cool the campus. Solar and wind power sources are integrated into the design. The York Hill campus design engineer was BVH Integrated Services, PC, and Centerbrook Architects & Planners served as the architect. The overall campus project won an award for Best Sustainable Design from The Real Estate Exchange in 2011.

Implementation challenges for the complex system

The ambitious project includes numerous energy components and systems. In effect, it was a microgrid before the term was widely used. Some years after initial construction began, Horton Electric, the electrical contractor, brought in Russelectric to provide assistance and recommendations for all aspects of protection, coordination of control, and utility integration – especially protection and control of the solar, wind and combined heating and power (CHP) components. Russelectric also provided project engineering for the actual equipment and coordination between its system and equipment, the utility service, the emergency power sources, and the renewable sources. Alan Vangas, current VP at BVH Integrated Services, said that “Russelectric was critical to the project as they served as the integrator and bridge for communications between building systems and the equipment.”

Startup and implementation was a complex process. The power system infrastructure, including the underground utilities, had been installed before all the energy system components had been fully developed. This made the development of an effective control system more challenging. Some of the challenges arose from utility integration with existing on-site equipment, in particular the utility entrance medium voltage (MV) equipment that had been installed with the first buildings. Because it was motor-operated, rather than breaker-operated, paralleling of generator sets with the utility (upon return of the utility source after power interruption) was not possible in one direction. They could parallel the natural gas generator to the utility, but because the generator was also used for emergency power, they could not parallel from the utility back to their microgrid.

Unique system controls all power distribution throughout the campus

In response to the unique challenges, Russelectric designed, delivered, and provided startup for a unique power control system, and has continued to service the system since startup. The system controls all power distribution throughout the campus, including all source breakers – utility (15kV and CHP), wind, solar, generators, MV loop bus substations, automatic transfer switches (ATSs), and load controls.

As might be expected, this complex system requires a very complex load control system. For example, it has to allow the hockey rink chillers to run in the summer during an outage but maintain power to the campus. 

Here is the complete power control system lineup:

  • 15 kilovolt (kV) utility source that feeds a ring bus with 8 medium voltage/low voltage (MV/LV) loop switching substations for each building. Russelectric controls the opening and closing of the utility main switch and monitors the health and protection of the utility main.
  • 15kV natural gas 2 megawatt (MW) Caterpillar CAT generator with switchgear for continuous parallel to the 15kV loop bus. Russelectric supplied the switchgear for full engine control and breaker operations to parallel with the utility and for emergency island operations.
  • One natural gas 750kW Caterpillar generator used for emergency backup only.
  • One gas-fired FlexEnergy micro turbine (Ingersoll Rand MT250 microturbine) for CHP distributed energy and utility tie to the LV substations. 
  • Control and distribution switchgear that controls the emergency, CHP, and utility. 
  • 12 ATSs for emergency power of 4 natural gas engines in each building. 
  • 25 vertical-axis wind turbines that generate 32,000 kilowatt-hours of renewable electricity annually. The wind turbines are connected to each of the LV substations. Russelectric controls the breaker output of the wind turbines and instructs the wind turbines when to come on or go off.
  • 721 rooftop photovoltaic panels gathering power from the sun, saving another 235,000 kilowatt-hours (kWh) per year. These are connected to each of the 3 dormitory LV substations. Russelectric controls the solar arrays’ breaker output and instructs the solar arrays when to come on or go off.

The system officially only parallels the onsite green energy generation components (solar, wind and micro turbine) with the utility, although they have run the natural gas engines in parallel with the solar in island mode for limited periods.

Since the initial installation, the system has been expanded to include additional equipment, including another natural gas generator, additional load controls, and several more ATSs.

SCADA displays complexity and detail of all the systems

Another feature of the Russelectric system for the project was the development of the Russelectric SCADA system, which takes the complexity and detail of all the systems and displays it for customer use. Other standard SCADA systems would not have been able to tie everything together – with one-line diagrams and front views of equipment that let operators visually take in the entire system.

While the Russelectric products used are known for their quality and superior construction, what really made this project stand out is Russelectric’s ability to handle such an incredibly wide variety of equipment and sources without standardizing on the type of generator or power source used. Rather than requiring use of specific players in the market, the company supports any equipment the customer wishes to use – signing on to working through the challenges to make the microgrid work. This is critical to success when the task is controlling multiple traditional and renewable sources.

By HANK YEE

https://www.anexinet.com/blog/disaster-recovery-components-within-policies-and-procedures/

While many would consider a discussion about disaster recovery policies and procedures boring (I certainly don’t), in reality, policies and procedures are 110% vital to a successful DR. Your organization could have the greatest technology in the world, but without a solid plan and policy guide in place, your disaster recovery efforts are doomed to fail.

A tad hyperbolic, perhaps. But the lack of properly updated documentation is one of the biggest flaws I see in most companies’ DR plans.

A disaster recovery plan is a master plan of a company’s approach to disaster recovery. It includes or references items like runbooks, test plans, communications plan, and more. These plans detail the steps an organization will take before, during, and after a disaster, and are usually related specifically to technology or information. Having it all written down ahead of time helps streamline complex scenarios, ensures no steps are missing from each process, and provides guidance around all elements associated with the DR plan (e.g. runbooks and test plans).

Creating a plan also provides the opportunity for discussion around topics that have likely not been considered before or are assumed to be generally understood.

*Which applications or hardware should be protected?
*When, specifically, should a disaster be declared, who can make that declaration, and who needs to be notified?
*Have response tiers been identified depending on the type of disaster?
*Which applications correspond to each tier?

The most critical condition of a successful DR plan is that it be kept updated and current—frequently. An outdated DR plan is a weak DR plan. Applications change. Hardware changes. And organizations change, both in terms of people and locations. Dealing with a disaster is hard enough, but no one needs the added pressure of trying to correlate an outdated organization chart with a current one. Or trying to map old server names and locations to existing ones. Pick a time-metric and a change-metric for when your DR plan will be updated (e.g. every six months, every year, upon a major application update to a mission-critical system). Pick some conditions and stick to them.
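As a minimal illustration of the time-metric/change-metric idea (not something prescribed here), the review trigger can be reduced to a short check. The six-month window and the "major change" flag below are assumptions of this sketch, not recommended values.

```python
from datetime import date, timedelta

# Assumed example review interval (roughly every six months); pick whatever
# time-metric and change-metric your organization commits to.
REVIEW_INTERVAL = timedelta(days=182)

def dr_plan_review_due(last_reviewed: date,
                       major_change_since_review: bool,
                       today: date = None) -> bool:
    """Return True if the DR plan should be updated now."""
    today = today or date.today()
    overdue = today - last_reviewed > REVIEW_INTERVAL
    return overdue or major_change_since_review

# Example: plan last reviewed in January, and a mission-critical
# application was upgraded since then -> a review is due.
print(dr_plan_review_due(date(2018, 1, 15), major_change_since_review=True))
```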

1) Runbooks
Runbooks are step-by-step procedure guides for select tasks within an IT organization. These reference guides are tailored to describe how your organization configured and implemented a specific technology or software, and focus on the tasks the relevant teams would need to perform in the event of a disaster.

Examples:
*How to start up or shut down an application/database/server.
*How to fail over a server/database/storage array to another site.
*How to check if an application/database has started up correctly.

The goal is to make your runbooks detailed enough that any proficient IT professional could successfully execute the instructions, regardless of their association with your organization. A runbook can consist of one big book or several smaller ones. They can be physical or electronic (or both). Ideally, they are stored in multiple locations.

Nobody likes documentation. But in a disaster, emotions and stress can run very high. So why leave it all up to memory? Having it all documented gives you a reliable back-up option.

Depending on the type of disaster, it’s possible the necessary staff members wouldn’t be able to get online, specifically the person who specializes in Server X, Y, or Z. Perhaps the entire regional team is offline, and a server/application has failed. A Linux admin is available, but he doesn’t support this server day in and day out. Now suddenly, he’s tasked with starting up the server and applications. Providing this admin with a guide on what to do, what scripts to call, and in what order, might just be the thing that saves your company.

And if your startup is automated—first off, great. But how do you check to be sure everything started up correctly? Which processes should be running? What log should be checked for errors? Is there a status code that can be referenced? Maybe this is a failover scenario: the server is no longer located in Philadelphia, and as such, certain configuration values need to be changed. Which values are they, and what should they be changed to?
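The kind of post-startup verification described above can be captured in a short script that a runbook entry points to. Everything named below (the process name, log path, and error marker) is a placeholder; a real runbook would spell out the actual values for each server and application.

```python
import subprocess
from pathlib import Path

# Placeholder values -- a real runbook entry would name the actual
# process, log file, and error signature for the system in question.
EXPECTED_PROCESS = "example_app_server"
LOG_FILE = Path("/var/log/example_app/startup.log")
ERROR_MARKER = "ERROR"

def process_running(name: str) -> bool:
    """Check whether a process matching the given name is running (Linux)."""
    result = subprocess.run(["pgrep", "-f", name], capture_output=True)
    return result.returncode == 0

def log_is_clean(log_path: Path, marker: str) -> bool:
    """Return True if the startup log exists and contains no error marker."""
    if not log_path.exists():
        return False
    return marker not in log_path.read_text(errors="ignore")

if __name__ == "__main__":
    ok = process_running(EXPECTED_PROCESS) and log_is_clean(LOG_FILE, ERROR_MARKER)
    print("startup verified" if ok else "startup checks FAILED -- see runbook")
```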

Runbooks leave nothing to memory or chance. They are the ultimate reference guide and as such should capture every detail of your organization’s DR plan.

2) Test Plans
Test Plans are documents that detail the objectives, resources, and processes necessary to test a specific piece of software or hardware. Like runbooks, they serve as a framework or guideline to aid in testing, and can help eliminate the unreliable memory factor from the disaster equation. Usually, test plans are synonymous with Quality Assurance departments. But in a disaster, they can be a massive help in organization and accuracy.

Test Plans catalog the test’s objectives, and the steps needed to test those objectives. They also define acceptable pass/fail criteria, and provide a means of documenting any deviations or issues encountered during testing. They are generally not as detailed as runbooks, and in many cases will reference the runbooks required for a specific step. 
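As a rough sketch of the structure just described (objectives, resources, steps with pass/fail criteria, and a place to record deviations), the field names below are illustrative rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    description: str          # what to do (may reference a runbook section)
    pass_criteria: str        # what "success" looks like for this step
    passed: bool = False
    deviation: str = ""       # notes on anything that did not go as planned

@dataclass
class TestPlan:
    objective: str            # e.g. "fail over the billing database to the DR site"
    resources: List[str]      # people, systems, and runbooks required
    steps: List[TestStep] = field(default_factory=list)

    def summary(self) -> str:
        """Overall pass/fail based on the recorded step results."""
        failed = [s for s in self.steps if not s.passed]
        return "PASS" if not failed else f"FAIL ({len(failed)} step(s) deviated)"
```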

3) Crisis Communication Plan
A Crisis Communication Plan outlines the basics of who, what, where, when, and how information gets communicated in a crisis. As with the above, the goal of a Crisis Communication Plan is to get many items sorted out beforehand, so they don’t need to be made up and/or decided upon in the midst of a trying situation. Information should be communicated accurately and consistently, and made available to everyone who needs it. This includes not only technical engineers but also your Marketing or Public Relations teams.

Pre-defined roles and responsibilities help alleviate the pressure on engineers to work in many different directions at once and can allow them to focus on fixing the problems while providing a nexus for higher-level managers to gather information and make decisions.
 
Remember, the best DR plans prepare your organization before, during, and after a disaster; focus as much on people as on data and computers; and are tested, implemented, and updated over time by creators willing to invest the time and money – engaging the entire company for a holistic approach.

As an Anexinet Delivery Manager in Hybrid IT & Cloud Services, Hank Yee helps design, implement and deliver quality solutions to clients. Hank has over a decade of experience with Oracle database technologies and Data Center Operations for the Pharmaceutical industry, with a focus on disaster recovery, and datacenter and enterprise data migrations.

By BOB GRIES

https://www.anexinet.com/blog/3-key-disaster-recovery-components-infrastructure/

Disaster Recovery (DR) is a simple concept that unfortunately becomes quite complex very quickly. At a high level, disaster recovery ensures the persistence of critical aspects of your business during or following a disaster, whether natural or man-made. How does one achieve this persistence? That’s where things can become very complex.

With regard to DR Infrastructure, when most people talk DR they want to get right into the specific nitty-gritty: what are the optimal data protection parameters? What’s the ideal configuration for a database management/monitoring solution? And that’s all well and good, but let’s worry about the cake first and the frosting later.

So, let’s take it up a few levels. Within Infrastructure, you have the systems, the connectivity, and the data.

1. Systems
Production Systems
These include servers, compute power, and non-human workhorses. You use these to process your orders, make decisions, and process your business-critical data. They may be physical, virtual, or in the cloud, but you know each one by name. You start your day by logging into one and every step of your workday involves some server doing something to assist you. Without it, you lose your customer-facing website, you lose your applications, and you lose everything else that makes your business an efficient organization.

Disaster Recovery Systems
If your production systems had a twin, the DR Systems would be it. These are duplicate, regularly tested, and fully capable systems, able to take over all the work you depend on your production systems for, the moment a failure occurs. Ideally, your DR Systems are housed in a different facility than the production system and are able to run at full capacity with no assistance from the production systems.

2. Connectivity
This is how everything talks to one another. Your production systems are connected by at least two separate network switches. If you use a SAN, you will have two separate fabrics. If you use the cloud, your connection to the cloud will also be redundant. Any secondary offices, remote data centers, or remote locations also use redundant network connections. Any replication flows over these lines. Your network provides connectivity to all your production and DR systems, such that end users can access their data, systems, and applications seamlessly, regardless of the state of your environment.

3. Data
The Hot Copy
This is the data your business depends on: the active dataset that your applications, users, and databases read from and write to each day. Typically, this data is RAID-protected, but further protections are necessary to ensure the data is safe.

The Backup Copy
This data set can exist in many forms, including a backup storage array, replicated storage, checkpoints, journaled file systems, etc. It is meant as a low Recovery Point Objective (RPO) option you can quickly use to restore data for non-catastrophic recoveries.

The Offsite Copy
This data is for long-term storage and is usually kept on a different medium than the Hot Copy and Backup Copy, including on tape, on removable media, in the cloud, or on a dedicated backup array. This data should be stored offsite and tested regularly. Additionally, this copy should be able to restore the data independent of any existing infrastructure and can be used to recover from a full disaster.
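One way to make the distinction between the three copies concrete is to track how old each copy is allowed to be relative to its Recovery Point Objective. The tier names and targets below are assumed examples for this sketch, not recommendations.

```python
from datetime import datetime, timedelta

# Assumed example RPO targets per copy tier (placeholders, not guidance).
RPO_TARGETS = {
    "hot": timedelta(minutes=15),     # replicated/journaled protection
    "backup": timedelta(hours=24),    # nightly backup copy
    "offsite": timedelta(days=7),     # weekly offsite / long-term copy
}

def rpo_breaches(last_copy_times: dict, now: datetime = None) -> list:
    """Return the copy tiers whose most recent copy is older than its RPO target."""
    now = now or datetime.now()
    return [tier for tier, taken_at in last_copy_times.items()
            if now - taken_at > RPO_TARGETS[tier]]

# Example: the offsite copy is ten days old, so it gets flagged.
print(rpo_breaches({
    "hot": datetime.now() - timedelta(minutes=5),
    "backup": datetime.now() - timedelta(hours=12),
    "offsite": datetime.now() - timedelta(days=10),
}))  # ['offsite']
```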

With those three areas identified, your business may begin holding strategic planning sessions to determine exactly which technologies and DR path are most appropriate for your organization and applications.

Bob Gries is a Senior Storage Consultant at Anexinet. Bob has specialized in Enterprise Storage/Backup Design and Implementation Services for over 13 years, utilizing technologies from Dell EMC, HPE, CommVault, Veeam and more.

By SARAH YOUNG

https://www.anexinet.com/blog/is-your-disaster-recovery-plan-still-sufficient-to-handle-unexpected-disasters/

A “disaster” is defined as a sudden, unexpected event that causes great damage and disruption to the functioning of a community and results in material, economic and environmental loss that strains that community’s resources. Disasters may occur naturally or be man-made. They may range in damage, from trivial—causing only brief delays and minimal loss—to catastrophic, costing hundreds of thousands to fully recover from.

The Insurance Information Institute asserts that in 2016 alone, natural catastrophes accounted for $46 billion in insured losses worldwide, while man-made disasters resulted in additional losses of approximately $8 billion.

At some point, we all experience a disaster: car accidents, fires, floods, tornados, job loss, etc. When it comes to routine or common disasters, we generally have a good idea what our recovery plan should be. If a pipe breaks, causing a flood, you call a plumber to fix the pipe and maybe a cleaning service to mop up. When disaster strikes a business, standard plans should be in place to quickly recover critical assets so as not to interrupt essential computer systems and production.

Meanwhile, the typical enterprise IT team is constantly on guard for the standard sources of disaster: power outages, electrical surges, and water damage that have the potential to cripple data centers, destroy records, halt revenue-generating apps, and cause business activities to freeze. Since these types of disasters are so common, we’ve developed ways to recover from them quickly; we’ve developed plans of action. But what about disasters we’ve never encountered or haven’t prepared for? Are we sure our recovery plans will save us from incurring huge costs, especially in the case of disasters we can’t predict?

In the last two decades, unforeseen disasters have hit 29 states, causing catastrophic problems for companies. Two planes crashed into buildings in lower Manhattan, wiping out major data centers. Multi-day, city-wide blackouts resulted in massive data loss. Hurricanes forced cities to impose mandatory closures of all non-essential work. These disasters not only created IT nightmares, they also exposed a whole host of DR-related issues companies had not yet even considered.

Business leaders forget how hard it is to think clearly under the stress and pressure of a sudden and unexpected event. Often, a sense of immunity or indifference to disasters prevails, specifically toward catastrophic events, since these types of disasters tend to be rare or unpredictable. So there’s no sense in pouring money into a one-off DR plan for a disaster that has a slim chance of ever occurring, right? Wrong.

A standard DR plan provides for what happens after a disaster has occurred. The best disaster recovery plan takes a holistic approach, preparing your company before, during, and after disaster strikes. Disaster recovery is as much about your people as it is about your data and computers. It’s about having a crisis communication plan (and about having plans, period). It’s about taking the time and spending the money to test and implement your DR plans. From dedicated DR personnel and DR checks to plan updates and documentation, an effective DR plan needs to engage the entire company.

So what should your DR plan look like? How will you know when it’s ready? How do you keep your DR plan from failing? Proper planning, design, and implementation of a solid DR plan can mean the difference between downtime that lasts for days and an outage that’s resolved in under an hour.

As an Anexinet Project Manager in Cloud & Hybrid IT Services, Sarah Young partners with clients and engineers to ensure projects are delivered effectively and efficiently while meeting all shareholder expectations. Having deftly handled complex client issues for over a decade, Sarah excels at translating technical requirements for audiences who may not be as technically fluent.

By STEVE SILVESTRI

https://www.anexinet.com/blog/6-best-practices-for-business-continuity-and-disaster-recovery-planning/


These days, organizations must be prepared for everything and anything: from cyber-threats to natural disasters. A BC/DR plan is your detailed process foundation, focused on resuming critical business functionality while minimizing losses in revenue (or other business operations).

Business leaders forget how hard it is to think clearly under the intense pressure of a sudden and unexpected disaster event, especially one that has the potential to severely impact the success of an organization. With the number of threat vectors looming today, it’s critical to protect your organization against future threats and prepare for recovery from the worst. Below are six best practice tips for creating a BC/DR plan that encompasses all areas of your business.

1. Devise a consistent plan, and ensure all plan components are fully accessible in the event of a major disaster.
You may prepare for weeks or even months, creating the best documentation and establishing resources to run to in a time of crisis. However, those resources are useless if they’re unavailable when most needed. Many companies document their BC/DR plan in Excel, Visio, Word, or as PDFs. And while this isn’t a bad approach, the files need to be stored in a consistently available location—whether that’s in the cloud, on physical paper, or in a DR planning system. Ensuring unhindered access should be a top priority; an inaccessible BC/DR plan is just as bad as not having a plan at all.

2. Maintain full copies of critical data OUTSIDE your production region.
If your organization keeps its primary data center in Houston, don’t build a secondary backup data center 30 miles down the road. Recent events have taught us that closely located data centers can all be severely impacted by the same disaster, hindering business services and data availability across nearby locations.
A general rule for maintaining a full copy of critical data and services is to keep it at least 150 miles from the primary data center. Of course, cases may exist where keeping a secondary data center close to its primary is recommended. However, these cases should be assessed by an expert consultant prior to pursuing this approach.
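For readers who want to sanity-check the 150-mile guideline above, a great-circle distance calculation between the two sites is enough. The coordinates below are purely illustrative, and the threshold is simply the rule of thumb quoted here, not a standard.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Illustrative coordinates: Houston, TX vs. Dallas, TX.
distance = great_circle_miles(29.7604, -95.3698, 32.7767, -96.7970)
print(f"{distance:.0f} miles apart; meets 150-mile guideline: {distance >= 150}")
```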

3. Keep your BC/DR plan up to date and ensure any production changes are reflected.
A lot may change between the inception of your BC/DR plan and the moment disaster strikes. For this reason, it should be a priority for your organization to maintain an up-to-date plan as production changes come into play.

Consider: your organization has successfully implemented a new plan, with recovery points and times all proven to work. Six months later, you’ve deployed a new application system that runs in the cloud instead of on-premises. Without an updated BC/DR plan, all your hard work would have been for nothing, since you wouldn’t be able to quickly recover anything. Keeping your plan in alignment with the production environment and practicing change management are important methods for staying on top of your latest additions.

4. Test your plan in a realistic way to make sure it works.
Without testing, a plan will never have successful execution to back itself up. In the chaos of a crisis, your untested plan will likely fail since people won’t know which parts of the plan work and which don’t. Your testing should encompass all possibilities—from a small process failing, to the entire facility being wiped out by a tornado. Included with these tests should be detailed explanations describing what’s working in the plan and what isn’t. These will develop and mature your plan over time, until business continuity is maintained even if something small is failing, and your organization doesn’t suffer any losses in revenue or customer trust. Testing also allows for recovery practice training, which will also reduce recovery time when real chaos occurs.

5. Leverage the use of virtualization
Load-balancing and failover systems are becoming more popular in the technology sector as cyber threats and natural disasters continue to affect business operations. Ensuring users are seamlessly transferred to a secondary environment creates the illusion that nothing is actually happening to your environment, allowing users to continue enjoying your services without disruption.

6. Create your plan with the mentality that anything can happen.
Regardless of how many times you test your plan, review each recovery process, or go over the points of failure, something may still go awry when the real thing happens. Always have a trusted team or experienced partner who can assist you in covering any gaps, and swiftly pull your organization out of a jam. Be sure to compose a list of priorities and, for each one, ask yourself: if this fails, what will we need to do to recover? Assume necessary personnel are not available and even make your team trade roles during the recovery period in order to spread awareness. Keep your team innovative and sharp for when something goes wrong so at least one person is aware of the right steps to take in each specific area.

Steve Silvestri is a Consultant on Anexinet's ISG team, focusing on Cyber Security issues, including Data Loss Prevention, Digital Forensics, Penetration Testing, and Incident Response.

With the ever-growing number of social media platforms, it’s inevitable that you find yourself using at least one form of social media throughout the day. As of 2017, 77% of US adults are on social media; odds are, you are using one of them. In the professional world, social media is a great way to network, build B2B partner relationships and form avenues of communication with other individuals in your industry. Here are some interesting facts about the platform that may boost your professionalism the most: LinkedIn.

As of 2018, LinkedIn has over 500 million members. Of those members, 260 million log in monthly, and of those monthly users, 40% are daily users. That makes for a great tool to utilize in building beneficial business relationships with others in the business continuity and disaster recovery industry. In fact, amongst Fortune 500 companies, LinkedIn is the most used social media platform. Most users of LinkedIn are high-level decision makers who leverage the platform to accomplish a variety of business tasks. Whether it’s gathering news, marketing, networking, or hiring, the opportunities are endless. Ninety-one percent of executives rated LinkedIn as their number one choice for professionally relevant content. Content consumption has jumped tremendously over recent years, so it’s no longer just person-to-person interaction; it is also useful for reading and sharing business content amongst a large set of people, across many different industries, including business continuity and disaster recovery.

...

https://www.bcinthecloud.com/2018/09/the-power-of-linkedin-for-bc-dr/

Wednesday, 19 September 2018 16:31

The Power of LinkedIn for BC/DR

Combining business continuity and risk management into a single operational process is the most effective way to prepare for the worst.

By ROBERT SIBIK

Combining two seemingly unrelated entities to make a better, more useful creation is a keystone of innovation. Think of products like the clock radio and the wheeled suitcase, or putting meat between two slices of bread to make a sandwich, and you can see how effective it can be to combine two outwardly disparate things.

This viewpoint is useful in many scenarios, including in the business realm, especially when it comes to protecting a business from risk. Many companies treat risk management and business continuity as different entities under the same workflows, and that is a mistake; to be optimally effective, the two must be combined and aligned.

Mistaken Approaches

Business continuity traditionally starts with a business impact assessment, but many companies don’t go beyond that, making no tactical plan or strategic decisions on how to reduce impact once they have identified what could go wrong. The risk management process has been more mature, identifying various ways to treat problems, assigning them to someone, and trying to reduce the likelihood of the event occurring, but not doing much to reduce the impact of the event.

Organizations must move beyond simplistic goals of creating a business continuity plan using legacy business continuity/disaster recovery tools, or demonstrating compliance to a standard or policy using legacy governance, risk management and compliance software tools. Those approaches incorrectly move the focus to, “do we have our plans done?” or create a checklist mentality of, “did we pass the audit?” 

In addition to legacy approaches, benchmarking must be avoided, because it can provide misleading conclusions about acceptable risk and appropriate investment, and create a false sense of having a competitive advantage over others in the industry. Even companies in the same industry should have their own ideas about what constitutes risk, because risks are driven by business strategy, process, how they support customers, what they do, and how they do it.

Take the retail industry. Two organizations may sell the same basic product – clothing – but one sells luxury brands and the other sells value brands. The latter store’s business processes and strategies will focus on discounts and sales as well as efficiencies in stocking and logistics. The former will focus on personalized service and in-store amenities for shoppers. These two stores may exist in the same industry and sell the same thing, but they have vastly different types of merchandise, prices and clientele, which means their shareholder value and business risks will look very different from each other.

Businesses need to understand levels of acceptable risk in their individual organization and map those risks to their business processes, measuring them based on how much the business is impacted if a process is disrupted. By determining what risks are acceptable, and what processes create a risk by being aligned too closely to an important strategy or resource, leadership can make rational decisions at the executive level on what extent they invest in resilience – based not on theory, but on reality.

Creating an Integrated Approach with the Bowtie Model

Using the bowtie model, organizations can appropriately marry business continuity and risk management practices.

The bowtie model – based on the preferred neckwear of high school science teachers and Winston Churchill – uses one half of the bow to represent the likelihood of risk events and the other half to represent mitigation measures. The middle – the knot – represents a disaster event, which may comprise disruptions like IT services going down, a warehouse fire, a workforce shortage or a supplier going out of business.

To use this model, first determine every possible disruption to your organization through painstaking analysis of your business processes. Then determine the likelihood of each disruption (the left part of the bow), as well as mitigating measures one can take to reduce the impact of the disruption should it occur (the right part of the bowtie).

Consider as an example the disruptive event of a building fire – the “knot” in this case. How likely is it? Was the building built in the 1800s and made of flammable materials like wood, or is it newer steel construction? Are there other businesses in the same building that would create a higher risk of fire, such as a restaurant? Do employees who smoke appropriately dispose of cigarettes in the right receptacle?

On the other half of the bowtie are the measures that could reduce the impact of a building fire, such as ensuring water sources and fire extinguishers throughout the building, testing sprinkler systems, having an alternate workspace to move to if part or all of the office is damaged during a fire, and so on.

The mitigating measures are especially key here, as they aren’t always captured in traditional insurance- and compliance-minded risk assessments. Understanding mitigation measures as well as the likelihood of risk events can change perspectives on how much risk an organization can take, because the organization then will understand what its business continuity and response capabilities are. Mitigation methods like being ready to move to an alternate workspace are more realistic than trying to prevent events entirely; at some point, you can accept the risk because you know how to address the impact.
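A minimal sketch of the bowtie structure described here, using the building-fire example: threats and their likelihoods sit on the left of the knot, and mitigations and the impact they remove sit on the right. The numbers are placeholders for whatever an organization's own analysis produces, and the independence assumption in the likelihood calculation is a simplification of this sketch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Threat:
    description: str
    annual_likelihood: float      # estimated probability of triggering the event in a year

@dataclass
class Mitigation:
    description: str
    impact_reduction: float       # fraction of the event's impact it removes (0..1)

@dataclass
class BowTie:
    event: str                    # the "knot": the disruptive event itself
    threats: List[Threat]         # left side: what could cause it
    mitigations: List[Mitigation] # right side: what limits the damage

    def event_likelihood(self) -> float:
        # Probability at least one threat triggers the event (assumes independence).
        p_none = 1.0
        for t in self.threats:
            p_none *= (1.0 - t.annual_likelihood)
        return 1.0 - p_none

    def residual_impact_fraction(self) -> float:
        # Fraction of the unmitigated impact that remains after all mitigations.
        remaining = 1.0
        for m in self.mitigations:
            remaining *= (1.0 - m.impact_reduction)
        return remaining

# Building-fire example from the article, with placeholder figures.
fire = BowTie(
    event="building fire",
    threats=[Threat("aging wooden structure", 0.02),
             Threat("restaurant tenant in the same building", 0.01)],
    mitigations=[Mitigation("tested sprinkler system", 0.5),
                 Mitigation("alternate workspace ready", 0.3)],
)
print(round(fire.event_likelihood(), 4), round(fire.residual_impact_fraction(), 2))
```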

A Winning Combination

Where risk management struggles is where business continuity can shine: understanding what creates shareholder value, what makes an organization unique in its industry among its competitors, and how it distinguishes itself. Alternately, risk management brings a new perspective to the idea of business continuity by focusing on types of disruptions, their likelihoods, and how to prevent them.

To create a panoramic view of where an organization can be harmed if something bad happens, businesses must merge the concepts of business resilience (dependencies, impacts, incident management, and recovery) and risk management (assessment, controls, and effectiveness) and optimize them.

Bringing the two views together and performing holistic dependency mapping of the entire ecosystem allows an organization to treat both as a single operational process, bringing data together to create actionable information (based on the “information foundation” the company has created about impacts to business operations that can result from a wide variety of disruptions and risks) to empower decisive actions and positive results.

Using the bowtie method to create this holistic view, companies get the best of both worlds and ensure they understand the possibilities of various disruptions, are taking steps to mitigate the possibilities of disasters, and have prepared their responses to disasters should they strike. This approach to risk management will help keep a business up and running and ensure greater value for shareholders – this year and in years to come.

♦♦♦

Robert Sibik is senior vice president at Fusion Risk Management.

 
 
Technology Modeling – the eBRP Way
Definition:

Technology modeling is a point-in-time snapshot of an Enterprise’s IT Services – including its dependencies on infrastructure – and interfaces to other services and Business Processes which depend on them.  This organizational Technology Model provides executives the critical decision support they need to understand the impacts of a service disruption.

...

https://ebrp.net/wp-content/uploads/2018/04/Technology-Modeling-the-eBRP-Way.pdf

Tuesday, 11 September 2018 14:48

Technology Modeling – the eBRP Way

Do you know the book “Don’t Sweat the Small Stuff”? Today’s post is about sweating the big stuff.

It lays out the five things that matter most for the success of your organization’s business continuity management (BCM) program.

DETAILS, DETAILS

Most business continuity managers are extremely detail-oriented. They have to be to do their job. If BCM teams don’t sweat the details of what they do, then their work is probably not very good and whatever plans they have made can probably not be relied upon.

However, everyone has the defects of their good points. Sometimes, people who are very detail-oriented can become focused on the wrong or less impactful items.

Imagine that you have been in a fender bender caused by another driver. The detail-oriented person gets out and carefully takes pictures of all of the scrapes on their car caused by the collision. The overly detail-oriented person does the same thing while not realizing that the front half of their car is hanging off a cliff.

By this definition, there are a lot of overly detail-oriented people on BCM teams!

We at MHA have found over the years that many BCM programs are obsessing over minor dents and scrapes at the same time as their programs are hanging off a cliff, so to speak.

With all that in mind, we thought it would be worthwhile to remind you about what really matters when it comes to business continuity management.

...

https://www.mha-it.com/2018/09/business-continuity-management-program/

The hiring process would be so much easier if finding IT personnel was like matching on a dating website. Unfortunately, many candidates and employees lack the technical skills needed to make them “Mr. Right.”

Thanks to shifts in technology, including the implementation of machine learning, new cybersecurity challenges and more, IT decision-makers are realizing the biggest roadblock to achieving digital transformation is the lack of qualified candidates with the right skills to do the job. Luckily, organizations have found several ways to address this dilemma.

Nurturing and developing the skills of your existing employees is one way to deal with the shortage of qualified candidates. By creating a positive work environment that empowers employees to test new technologies and learn new skillsets, organizations are crafting opportunities from within, developing the skills they need and retaining talent through a commitment to education.

Finding a partner for consulting or to fully manage aspects of your IT also has its advantages. Instead of struggling to find candidates that can do the job, you can save time and resources by working with an organization that already possesses the talents you’re searching for. That frees up time for your IT team to focus on more strategic projects.

Whether it’s molding talent from within or cultivating a relationship with a partner, that perfect IT “match” may be closer than you think.

https://blog.sungardas.com/2018/09/mind-the-skills-gap-match-made-in-it-heaven/

Tuesday, 04 September 2018 14:38

Mind the Skills Gap: Match Made in IT Heaven?

Definition:

A Business Impact Analysis (BIA) is the cornerstone of creating a BCM program. Basically, a BIA helps prioritize restoration efforts in the initial response activities following an operational disruption. A secondary objective of a BIA is identification of all operational dependencies to enable successful business restoration.

...

https://ebrp.net/wp-content/uploads/2018/04/eBIA-the-eBRP-Way-1.pdf

Wednesday, 29 August 2018 15:10

eBIA – The eBRP Way

It must be human nature to worry more about serious dangers that are unlikely to happen than moderate ones whose likelihood of happening is high.

This would explain why the term “shark attack” brings up 98 million results on Google and the word “sunburn” brings up only 22 million results, even though the odds of a beachgoer getting attacked by a shark are one in 11.5 million, according to Wikipedia, while the Centers for Disease Control says that half of all people under thirty report having gotten a sunburn in the past year.

The chances of a beachgoer’s getting bitten by a shark are less than one in ten million, and of someone getting a sunburn, one out of two; yet, judging by those search results, we’re roughly four times more likely to write and post—and presumably talk, think, and worry—about shark attacks.

Sunburn is no joke since serious cases are associated with an increase in skin cancer later in life.

On the other hand, shark attacks are not only potentially catastrophic, they’re also perversely entertaining to think about. Sunburn, not so much.

“SUNBURN PROBLEMS” AND BUSINESS CONTINUITY

We at MHA Consulting have noticed that a similar pattern prevails in business continuity management (BCM).

The BC community focuses a great deal of attention on such high-drama but low-probability scenarios as a hurricane wiping out a data center, a plane crashing into a facility, or an active shooter entering the workplace.

Obviously, all of these do happen, and they are very serious and potentially catastrophic. The responsible BCM program includes plans to handle all of these types of incidents. (Of course, they should focus on the type of impact rather than the particular scenario, as we’ve discussed before.)

But there are many BC problems which are more like a sunburn than shark attacks: they aren’t especially dramatic, but they do bring pain and discomfort and sometimes worse, and they happen almost constantly.

In today’s post, we’ll set forth some of the most common “sunburn problems.”

It’s essential to conduct enterprise risk assessments that look at the most serious potential impacts to the organization. But don’t forget to also consider these more modest but highly likely problems.

...

https://www.mha-it.com/2018/08/sunburn-problems/

Daniel Perrin, Global Solutions Director, Workplace Recovery, IWG

With hurricanes and other natural disasters impacting the U.S., now, more than ever, companies are re-examining their business continuity plans. Traditional workplace recovery strategies, though, haven’t kept pace with modern business needs. Historically, companies built their strategy around IT. This meant that when disaster struck, to keep critical staff working, businesses needed access to their data.

The solution was to keep offices near a recovery server, ready for when a problem shut the office down. If that happened, businesses would send the 20 or so needed staff to work from space next to the server. That’s the model the industry has followed, but it is a model that has become obsolete.

Why? There are three main reasons:
  1. Technology has evolved dramatically since most large businesses first developed a workplace recovery strategy. The rise in cloud computing means that data is not housed in one particular place. It can be accessed from anywhere. This means a recovery plan no longer needs to be based entirely on the location of servers. It can be based on what works best for your business at a particular time.
  2. Recovering to one fixed location can be a logistical nightmare – if not ill-advised. Of course, if a small leak in an office has rendered it unusable, you can move staff to a specific, identified back-up office. But what if your city is flooded or facing another equally significant impact event? Chances are one of two things will occur if you are dependent for recovery on one specific location: either your back-up location will also be hit, or your people won’t be able to get there. In today’s world, a smart business needs to develop a workplace recovery strategy that is responsive and dynamic – one that can evolve with a live situation.
  3. The traditional financial model of making workplace recovery centers profitable revolves around oversubscribing each one – essentially selling the same “seat” to 10 or so different businesses. This makes sense based on the assumption that different businesses will not need recovery at the same time. But, in the example above – a major incident affecting large swathes of a city – chances are multiple companies will be impacted. Businesses therefore run the risk that at the one time they need the recovery seat they’ve been paying for, someone else may be sitting in it.

 

What makes a dynamic workplace recovery provider?

Primarily, one that offers a network of locations to choose from and offers flexibility to meet customers’ needs. And, a provider that will guarantee you space in any type of emergency, especially ones that impact entire cities.

For example, when Hurricane Harvey hit Texas in 2017, Regus, which provides flexible workspace and is owned by IWG, offered the capacity to ensure that customers could continue working because it had 70 locations in the area. For example, one of our customers wanted to recover to one of our offices in the Woodlands, outside of Houston. This seemed sensible, but as the storm approached it became clear that this client’s employees would not be able to reach the site. We were able, proactively, to contact the customer and adapt their plan in real time, by the minute, recovering them to another location that would not be affected.

Businesses are realizing that workplace recovery plans are critical and that their current plans may not be fit for purpose. It’s a good time for companies to evaluate their plans and ensure that they are working with dynamic partners that have the flexibility to meet their needs.

For more information, visit http://www.iwgplc.com/.

Albert Einstein once stated, “The important thing is not to stop questioning. Curiosity has its own reason for existing.” As a recent college graduate, I have found that this quote helped shape my decisions in college and as I started my career. I was always very quiet and did not like being outside of my comfort zone; only recently did my curiosity help me step out of it. Being curious and confident is the reason I graduated in the field of Information Science Technology (IST), and why I chose to intern at BC in the Cloud.

While in college, I faced many important decisions. When I started my freshman year at Penn State I wanted to be a computer scientist and develop software. I’ve always had a great passion for technology and thought this field would be a great fit. It took me about two years to realize that I was losing interest in computer science; the course materials were overly complicated and lacked excitement. But I didn’t want to stop pursuing my passion for technology. I felt stuck, wondering whether I should stay in the field. Then a friend told me about a major called Information Science Technology (IST). What he told me blew my mind, because I could learn and enjoy development without taking excessively complex engineering courses. IST breaks up into two sections: Integrations and Application, and Design and Development. The major does not just provide development courses, but also courses in networking, telecommunications, cyber security, and project management. After learning about this, I became curious, but also afraid. I was afraid that if I changed my major, people would think of me as someone who just works on computers (the IT guy stereotype). I ended up following my curiosity and studied IST. And I don’t regret it at all.

...

https://www.bcinthecloud.com/2018/07/keys_to_growing/

In today’s constantly moving and changing world your community needs a mass notification system.

How else will you quickly reach your residents with warnings or instructions during a pending storm? An active shooter scenario? A flash flood across a major highway?

Once a system is purchased, the key to success?  Implementation.  A well-considered implementation will lay the foundation for how effectively the system will operate during a crisis.  Check out these five pro tips for a smooth and stellar implementation.

...

https://www.onsolve.com/blog/top-5-keys-to-implementing-a-great-mass-notification-solution/

According to ISO, risk is defined as the effect of uncertainty on objectives, focusing on the effect of incomplete knowledge of events or circumstances on an organization’s decision-making. For companies that have accepted this definition and are looking to mature their risk programs and enable a risk culture, ISO 31000’s risk management framework is a great place to start. The ISO 31000 principles can help these organizations score the maturity of their risk processes and culture.
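
As a rough, hypothetical illustration of what “scoring the maturity” of risk processes against the principles might look like, the short Python sketch below aggregates self-assessment scores into a single index. The abbreviated principle labels, the 1-to-5 scale, and the simple averaging are illustrative choices only, not something prescribed by ISO 31000.

    # Hypothetical self-assessment: score a few ISO 31000 principles on a 1-5 scale.
    # Labels, scale, and averaging are illustrative, not part of the standard.
    scores = {
        "Integrated into organizational processes": 3,
        "Structured and comprehensive": 2,
        "Best available information": 4,
        "Human and cultural factors": 2,
        # ...the remaining principles would be scored the same way
    }

    maturity_index = sum(scores.values()) / (5 * len(scores))  # 0.0 (ad hoc) to 1.0 (optimized)
    weakest = min(scores, key=scores.get)
    print(f"Maturity index: {maturity_index:.0%}; first focus area: {weakest}")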

Technology is a critical element of implementing effective risk and decision-making practices because it bridges the communication gap between teams, breaks down departmental silos, facilitates collaboration and information access, and automates tedious tasks. Great technology can’t make up for bad practice, but without it, no program will meet the ISO 31000 principles.

As ISO puts it, the revised standard “delivers a clearer, shorter and more concise guide that will help organizations use risk management principles to improve planning and make better decisions.”

To explain how Resolver believes risk technology can help organizations match ISO’s vision, we break down the 11 principles into groups and share our insight:

...

https://www.resolver.com/blog/iso-31000-principles-technology/

By CONNOR COX, Director of Business Development, DH2i (http://dh2i.com)

In 2017, many major organizations—including Delta Airlines and Amazon Web Services (AWS)—experienced massive IT outages. Despite the growing number of internationally publicized outages like these, an Uptime Institute survey collected by 451 Research had some interesting findings. While a quarter of participating companies experienced an unplanned data center outage in the last 12 months, close to one-third (32 percent) still lack confidence that their resiliency strategy would leave them fully prepared should a disaster such as a site-wide outage strike their IT environments.

Much of this failure to prepare for the unthinkable can be attributed to three points of conventional wisdom when it comes to disaster recovery (DR):

  • Comprehensive, bulletproof DR is expensive

  • Implementation of true high availability (HA)/DR is extremely complex, with database, infrastructure, and app teams involved

  • It’s very difficult to configure a resiliency strategy that adequately protects both new and legacy applications 

Latency is also an issue, and there is often a trade-off between cost and availability with most solutions. These assumptions can hold true when you are talking about traditional DR approaches for SQL Server. One of the more common approaches is Always On Availability Groups, which provides management at the database level as well as replication for critical databases. Another traditional solution is Failover Cluster Instances, and virtualization can be used in combination with either strategy or on its own.

There are challenges to each of these common solutions, however, starting with the cost and availability tradeoff. In order to get higher availability for SQL Server, it often means much higher costs. Licensing restrictions can also come into play, since in order to do Availability Groups with more than a single database, you need to use Enterprise Edition of SQL Server, which can cause costs to rapidly rise. There are also complexities surrounding these approaches, including the fact that everything needs to be the same, or “like for like” for any Microsoft clustering approach. This can make things difficult if you have a heterogeneous environment or if you need to do updates or upgrades, which can incur lengthy outages.
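
For readers running Availability Groups today, the snippet below is a minimal monitoring sketch, not a substitute for a full DR design: it queries SQL Server’s own dynamic management views over pyodbc to show replica roles and synchronization health. The server name, credentials, and driver string are placeholders.

    # Minimal sketch: check Always On Availability Group replica health via SQL Server DMVs.
    # Assumes an AG is already configured; connection details below are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sql-listener.example.local;DATABASE=master;"
        "UID=monitor_user;PWD=example_password"
    )

    query = """
    SELECT ag.name, ar.replica_server_name, ars.role_desc, ars.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states AS ars
    JOIN sys.availability_groups AS ag ON ag.group_id = ars.group_id
    JOIN sys.availability_replicas AS ar ON ar.replica_id = ars.replica_id;
    """

    for ag_name, replica, role, health in conn.execute(query):
        flag = "" if health == "HEALTHY" else "  <-- investigate"
        print(f"{ag_name}: {replica} ({role}) {health}{flag}")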

But does this have to be so? Is it possible to flip this paradigm to enable easy, cost-effective DR for heavy-duty applications like SQL Server, as well as containerized applications? Fortunately, the answer is yes—by using an all-inclusive software-based approach, DR can become relatively simple for an organization. Let’s examine how and why this is the case.

Simplifying HA/DR

The best modern approach to HA/DR is one that encapsulates instances and allows you to move them between hosts, with almost no downtime. This is achieved using a lightweight Vhost—really just a name and IP address—in order to abstract and encapsulate those instances. This strategy provides a consistent connection string.

Crucial to this concept is built-in HA, which gives automated fault protection at the SQL Server instance level from host to host locally. This can then be easily extended to site-to-site disaster recovery, creating in essence an “HA/DR” solution. The solution relies on a means of replicating the data from site A to site B, while the tool manages the failover component of rehosting the instances themselves to the other site. This gives you many choices for data replication, from common array replication to vSAN technology or Storage Replica.
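
The idea is easier to see in sketch form. The Python snippet below is purely conceptual and does not represent DH2i’s actual software or API; it simply models a Vhost as a stable name and IP address that clients keep connecting to while the instance is rehosted underneath it.

    # Conceptual model only: a virtual host (name + IP) that stays constant while the
    # SQL Server instance it fronts is rehosted on another node during failover.
    from dataclasses import dataclass

    @dataclass
    class Vhost:
        name: str                 # the connection string clients use never changes
        ip_address: str
        active_node: str          # host currently running the instance

        def failover(self, target_node: str) -> None:
            # A real product would stop the instance, re-point storage/replication,
            # and restart it on the target node; here we only record the move.
            print(f"{self.name}: rehosting from {self.active_node} to {target_node}")
            self.active_node = target_node

    finance_sql = Vhost("vhost-finance", "10.1.1.50", active_node="node-a.site1")
    finance_sql.failover("node-b.site2")   # site-to-site DR looks the same as local HA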

So with HA plus DR built in, a software solution like this is set apart from the traditional DR approaches for SQL Server. First, it can manage any infrastructure, as it is completely agnostic to underlying infrastructure, from bare metal to virtual machines or even a combination. It can also be run in the cloud, so if you have a cloud-based workload that you want to provide DR for, it’s simple to layer this onto that deployment and be able to get DR capabilities from within the same cloud or even to a different cloud. Since it isn’t restricted in needing to be “like for like,” this can be done for Windows Server all the way back to 2008R2, or even on your SQL Server for Linux deployments, Docker containers, or SQL Server from 2005 on up. You can mix versions of SQL Server or even the operating system within the same environment.

As for upgrades and updates, because you can mix and match versions, updates require minimal downtime. And when you think about the cost and complexity trade-off we see with the traditional solutions, this software-based tool breaks it because it facilitates high levels of consolidation. Since instances can be moved around freely, users of this solution stack anywhere from 5 to 15 SQL Server instances per server on average, with no additional licensing required to do so. The result is a massive consolidation of the footprint, with management and licensing benefits that yield licensing savings of 25 to 60 percent on average.

There is also no restriction on the edition of SQL Server you must use to do this type of clustering. So, you can do HA/DR with many nodes all on Standard Edition of SQL Server, which can create huge savings compared to having to buy premium editions. And if you’ve already purchased Enterprise Edition licenses, you can reclaim them for future use.

Redefining DB Availability

How does this look in practice? You can, for example, install this tool on two existing servers, add a SQL Server instance under management, and very simply fail that instance over for local HA. You can add a third node that can be in a different subnet and any distance away from the first two nodes, and then move that instance over to the other site—either manually or as the result of an outage.

By leveraging standalone instances for fewer requirements and greater clustering ability, this software-based solution decouples application workloads, file shares, services, and Docker containers from the underlying infrastructure. All of this requires no standardization of the entire database environment on one version or edition of the OS and database, enabling complete instance mobility from any host to any host. In addition to instance-level HA and near-zero planned and unplanned downtime, other benefits include management simplicity, peak utilization and consolidation, and significant cost savings.

It all comes down to redefining database availability. Traditional solutions mean that there is a positive correlation between cost and availability, and that you’ll have to pay up if you want peak availability for your environment. These solutions are also going to be difficult to manage due to their inherent complexity. But you don’t need to just accept these facts as your only option and have your IT team work ridiculous hours to keep your IT environment running smoothly. You do have options, if you consider turning to an all-inclusive approach for the total optimization of your environment.

In short, the right software solution can help unlock huge cost savings and consolidation, as well as management simplification, in your datacenter. Unlike traditional DR approaches for SQL Server, this one allows you to use any infrastructure in any mix and be assured of HA and portability. There’s really no other way to unify HA/DR management for SQL Server, Windows, Linux, and Docker to enable sizeable licensing savings – while also unifying disparate infrastructure across subnets for quick and easy failover.

 
Connor Cox is a technical business development executive with extensive experience helping customers transform their IT capabilities to maximize business value. As an enterprise IT strategist, Connor helps organizations achieve the highest overall IT service availability, improve agility, and minimize TCO. He has worked in the enterprise tech startup field for the past 5 years. Connor earned a Bachelor of Science in Business Administration from Colorado State University and was recently named a 2017 CRN Channel Chief.

 

    

As a Business Continuity practitioner with more than 20 years of experience, I have had the opportunity to see, review, and create many continuity and disaster recovery plans. I have seen them in various shapes and sizes, from the meager 35-row spreadsheet to 1,000-plus pages in 3-ring binders. Reading these plans, in most cases the planners’ intent is very evident: check the “DR Plans done” box.

There are many different types of plans that are called into play when a disruption occurs. These could include Emergency Health & Safety, Crisis Management, Business Continuity, Disaster Recovery, Pandemic Response, Cyber Security Incident Response, and Continuity of Operations (COOP) plans.

The essence of all these plans is to define “what” action is to be done, “when” it has to be performed and “who” is assigned the responsibility.

The plans are the definitive guide to respond to a disruption and have to be unambiguous and concise, while at the same time providing all the data needed for informed decision making.

...

https://www.ebrp.net/dr-plans-the-what-when-who/

Wednesday, 02 May 2018 14:15

DR Plans – The What, When & Who

By Tim Crosby

PREFACE: This article was written before ‘Meltdown’ and ‘Spectre’ were announced – two new critical “Day Zero” vulnerabilities that affect nearly every organization in the world. Given the sheer number of vulnerabilities identified in the last 12 months, one would think patch management would be a top priority for most organizations, but that is not the case. If the “EternalBlue” (MS17-010) and “Conficker” (MS08-067) vulnerabilities are any indication, I have little doubt that I will be finding the “Meltdown” and “Spectre” exploits in my audit initiatives for the next 18 months or longer. This article is intended to emphasize the importance of timely software updates.

“It Only Takes One” – One exploitable vulnerability, one easily guessable password, one careless click, one is all it takes. So, is all this focus on cyber security just a big waste of time? The answer is NO. A few simple steps or actions can make an enormous difference for when that “One” action occurs.

The key step everyone knows, but most seem to forget, is keeping your software and firmware updated. Outdated software provides hackers the footholds they need to break into your network, escalate privileges, and move laterally. During a recent engagement, 2% of the targeted users clicked on a link with an embedded payload that provided us shell access into their network. A quick scan identified a system with an easily exploitable Solaris Telnet vulnerability that allowed us to establish a more secure position. The vulnerable Solaris system was a video projector to which no one gave a second thought, even though the firmware update had existed for years. Our scan through this projector showed SMBv1 traffic, so we scanned for “EternalBlue,” targeting 2008 servers due to the likelihood that they would have exceptions to the “Auto Logoff” policy and would be a great place to gather clear-text credentials for administrators or helpdesk/privileged accounts. Several of these servers were older HP servers with HP System Management Homepages, some were running Apache Tomcat with default credentials (which should ring a bell – the Equifax Argentina hack), a few were running JBoss/JMX, and one system was even vulnerable to MS09-050.

The vulnerabilities that make the above scenario possible have published exploits readily available in the form of free, open-source software designed for penetration testing. We used Metasploit Framework to exploit a few of the “EternalBlue”-vulnerable systems, followed the NotPetya script, and dumped clear-text credentials with Mimikatz. Before our scans completed, we were on a Domain Controller with “System” privileges. The total time from “one careless click” to Enterprise Admin: less than 2 hours.

The key to our success? Not our keen code-writing ability, not a new “Day 0” vulnerability, not a network of supercomputers, not thousands of IoT devices working in unison; it wasn’t even a trove of payloads purchased with Bitcoin on the Dark Web. The key was systems vulnerable to widely publicized exploits with widely available fixes in the form of updated software and/or patches. In short, outdated software. We used standard laptops running the Kali or Parrot Linux operating systems with widely available free and/or open-source software, most of which comes preloaded on those Linux distributions.

The projector running Solaris is not uncommon; many office devices, including printers and copiers, have full Unix or Linux operating systems with internal hard drives. Most of these devices go unpatched and therefore make great pivoting opportunities. They also provide an opportunity to gather data (printed or scanned documents) and forward it to an external FTP site during off hours; this is known as a store-and-forward platform. The patch/update for the system we referenced above has been available since 2014. Many of these devices also come with WiFi and/or Bluetooth interfaces enabled even when connected directly to the network via Ethernet, making them a target for bypassing your firewalls and WPA2 Enterprise security. Any device that connects to your network, no matter how small or innocuous, needs to be patched and/or have software updates applied on a regular basis, as well as undergo rigorous system hardening procedures, including disabling unused interfaces and changing default access settings. This device with outdated software extended our attack long enough to identify other soft targets. Had it been updated/patched, our initial foothold could have vanished the first time auto logoff occurred.

Before you scoff or get judgmental, believing only incompetent or lazy network administrators or managers could allow this to happen, slow down and think. Where do the patch management statistics for your organization come from? What data do you rely on? Most organizations gather and report patching statistics based on data directly from their patch management platform. Fact – systems fall out of patch management systems, or are never added, for many reasons, such as a failed GPO push, a switch outage during the process, or systems that fall outside the patch manager’s responsibility or knowledge (printers, network devices, video projectors, VoIP systems). Fact – your spam filter may be filtering critical patch-failure reports; this happens far more often than you might imagine.

A process outside of the patching system needs to verify that every device is in the patch management system and that the system is capable of pushing all patches to all devices. This process can be as simple and cost-effective as running and reviewing NMAP scripts, or as complex and automated as commercial products such as Tenable’s Security Center or BeyondTrust’s Retina, which can be scheduled to run and report immediately following the scheduled patch updates. THIS IS CRITICAL! Unless you know every device connected to your network – wired, wireless, or virtual – and its patch/version health status, there are going to be holes in your security. At the end of this process, no matter what it looks like internally, the CISO/CIO/ISO should be able to answer the following (a simple scripted check of this kind is sketched after the list below):

  • Did the patches actually get applied?

  • Did the patches undo a previous workaround or code fix?

  • Did ALL systems get patched?

  • Are there any NEW critical or high-risk vulnerabilities that need to be addressed?
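
As one example of the “simple and cost-effective” end of that verification spectrum, the sketch below shells out to Nmap’s smb-vuln-ms17-010 script to re-check hosts independently of the patch management platform. It assumes the nmap binary is installed and that you are authorized to scan these systems; the host list is a placeholder for your own asset inventory.

    # Illustrative verification sweep: re-check hosts for EternalBlue (MS17-010)
    # independently of the patch management platform. Requires nmap to be installed
    # and authorization to scan; the host list is a placeholder.
    import subprocess

    HOSTS = ["10.0.0.5", "10.0.0.23", "10.0.1.17"]  # pulled from your asset inventory

    def vulnerable_to_ms17_010(host: str) -> bool:
        result = subprocess.run(
            ["nmap", "-p445", "--script", "smb-vuln-ms17-010", host],
            capture_output=True, text=True, timeout=300,
        )
        return "VULNERABLE" in result.stdout

    for host in HOSTS:
        if vulnerable_to_ms17_010(host):
            print(f"{host}: STILL VULNERABLE - patch did not land, investigate")
        else:
            print(f"{host}: no MS17-010 finding (patched, or port 445 closed)")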

There are probably going to be devices that need to be patched manually, and there is a strong likelihood that some software applications are locked into vulnerable versions of Java, Flash, or even Windows XP/2003/2000. So, there will be devices that are patched less frequently or not at all. Many organizations simply say, “That’s just how it is until manpower or technology changes – we just accept the risk.”

That may be a reasonable response for your organization; it all depends on your risk tolerance. If you have a lower risk appetite, what about firewalls or VLANs with ACL restrictions for devices that can’t be patched or upgraded? Why not leverage virtualization to reduce the attack surface of that business-critical application that needs to run on an old version of Java or only works on 2003 or XP? Published-application technologies from Citrix, Microsoft, VMware or Phantosys fence the vulnerabilities into a small isolated window that can’t be accessed by the workstation OS. Properly implemented, the combination of VLANs/DMZs and application virtualization reduces the actual probability of exploit to nearly zero and creates an easy way to identify and log any attempts to access or compromise these vulnerable systems. Once again, these are mitigating countermeasures when patching isn’t an option.

We will be making many recommendations to our clients, including multi-factor authentication for VLAN access, changes to password length and complexity, and additional VLANs. However, topping the list of suggestions will be patch management and regular internal vulnerability scanning, preferably as the verification step for the full patch management cycle. Keeping your systems patched ensures that when someone makes a mistake and lets the bad guys or malware in, they have nowhere to go and a limited time to get there.

As an ethical hacker or penetration tester, one of the most frustrating things I encounter is spending weeks of effort to identify and secure a foothold on a network only to find myself stuck; I can’t escalate privileges, I can’t make the session persistent, I can’t move laterally, ultimately rendering my attempts unsuccessful. Though frustrating for me, this is the optimal outcome for our clients as it means they are being proactive about their security controls.

Frequently, hackers are looking for soft targets and follow the path of least resistance. To protect yourself, patch your systems and isolate those you can’t patch. By doing so, you will increase the difficulty, effort, and time required, giving a pretty good chance they will move on to someone else. There is an old joke about two guys running from a bear, and the punch line applies here as well: “I don’t need to be faster than the bear, just faster than you…”

Make sure ALL of your systems are patched, upgraded, or isolated with mitigating countermeasures, thus making you faster than the other guy who can’t outrun the bear.

About Tim Crosby:

Timothy Crosby is Senior Security Consultant for Spohn Security Solutions. He has over 30 years of experience in the areas of data and network security. His career began in the early 80s securing data communications as a teletype and cryptographic support technician/engineer for the United States military, including numerous overseas deployments. Building on the skillsets he developed in these roles, he transitioned into network engineering, administration, and security for a combination of public- and private-sector organizations throughout the world, many of which required maintaining a security clearance. He holds industry-leading certifications in his field and has been involved with designing the requirements and testing protocols for other industry certifications. When not spending time in the world of cybersecurity, he is most likely found in the great outdoors with his wife, children, and grandchildren.

Migrating and managing your data storage in the cloud can offer significant value to the business. Start by making good strategic decisions about moving data to the cloud, and which cloud storage management toolsets to invest in.

Your cloud storage vendor will provide some security, availability, and reporting. But the more important your data is, the more you want to invest in specialized tools that will help you to manage and optimize it.

Cloud Storage Migration and Management Overview

First, know whether you are moving data into an application computing environment or moving backup/archival data for long-term storage in the cloud. Many companies start off by storing long-term backup data in the cloud; others start with Office 365. Still others work with application providers, like Oracle or SAP, who extend the application environment to the vendor-owned cloud. In all cases you need to understand storage costs and information security, such as encryption. You will also need to decide how to migrate the data to the cloud.
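
As a small illustration of the long-term backup case, the sketch below assumes AWS S3 is the chosen provider and uses boto3 to upload an archive with server-side encryption and a colder storage class; the bucket name, object key, and file path are placeholders, and other providers have equivalent controls.

    # Illustrative only: move a backup archive to S3 with encryption and an
    # infrequent-access storage class, then confirm how it was stored.
    import boto3

    s3 = boto3.client("s3")

    def migrate_archive(local_path: str, bucket: str, key: str) -> None:
        s3.upload_file(
            local_path, bucket, key,
            ExtraArgs={
                "ServerSideEncryption": "AES256",
                "StorageClass": "STANDARD_IA",   # or GLACIER for true archival tiers
            },
        )
        head = s3.head_object(Bucket=bucket, Key=key)
        print(key, head.get("ServerSideEncryption"), head.get("StorageClass"))

    migrate_archive("backups/2018-03.tar.gz", "example-archive-bucket", "backups/2018-03.tar.gz")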

...

http://www.enterprisestorageforum.com/storage-management/managing-cloud-storage-migration.html

Tuesday, 27 March 2018 05:11

Managing Cloud Storage Migration

Leveraging Compliance to Build Regulator and Customer Trust

Bitcoin and other cryptocurrencies continue to gain ground as investors buy in, looking for high returns, and as acceptance of it as payment takes hold. However, with such growth come risks and challenges that fall firmly under the compliance umbrella and must be addressed in a proactive, rather than reactive, manner.

Cryptocurrency Challenges

One of the greatest challenges faced by the cryptocurrency industry is its volatility and the fact that the cryptocurrency markets are, unlike mainstream currency markets, a social construct. Just as significantly, all cryptocurrency business is conducted via the internet, placing certain obstacles in the path of documentation. The online nature of cryptocurrency leads many, especially regulators, to remain dubious of its legitimacy and suspicious that it is used primarily for nefarious purposes, such as money-laundering and drug trafficking, to name a few.

This leaves companies that have delved into cryptocurrency with an onerous task: building trust among regulators and customers alike, with the ultimate goal of fostering cryptocurrency’s survival. From a regulatory standpoint, building trust involves not only setting policies and procedures pertaining to the vetting of customers and the handling of cryptocurrency transactions and trades, but also leveraging technology to document and communicate them to the appropriate parties. Earning regulators’ trust also means keeping meticulous records rendered legally defensible by technology. Such records should detail which procedures for vetting customers were followed; when, by whom and in what jurisdiction the vetting took place; and what information was shared with customers at every step of their journey.

On the customer side, records must document the terms of all transactions and the messages conveyed to customers throughout their journey. Records of what customers were told regarding how a company handles its cryptocurrency transactions and any measures it takes to ensure the legitimacy of activities connected with transactions should be maintained as well.

...

http://www.corporatecomplianceinsights.com/cryptocurrency-challenges-opportunities/

How to help your organization plan for and respond to weather emergencies

By Glen Denny, Baron Services, Inc.

Hospitals, campuses, and emergency management offices should all be actively preparing for winter weather so they can be ready to respond to emergencies. Weather across the country is varied and ever-changing, but each region has specific weather threats that are common to their area. Understanding these common weather patterns and preparing for them in advance is an essential element of an emergency preparedness plan. For each weather event, those responsible for organizational safety should know and understand these four important factors: location, topography, timing, and pacing.

In addition, be sure to understand the important terms the National Weather Service (NWS) uses to describe changing weather conditions. Finally, develop and communicate a plan for preparing for and responding to winter weather emergencies. Following the simple steps in the sample planning tool provided will aid you in building an action plan for specific weather emergency types.

Location determines the type, frequency and severity of winter weather

The type of winter weather experienced by a region depends in great part on its location, including proximity to the equator, bodies of water, mountains, and forests. These factors can shape the behavior of winter weather in a region, determining its type, frequency, and severity. Knowing how weather affects a region can be the difference between lives saved and lives lost.

Winter weather can have a huge impact on a region’s economy. For example, in the first quarter of 2015, insurance claims for winter storm damage totaled $2.3 billion, according to the Insurance Information Institute, a New York-based industry association. One Boston-area insurance executive called it the worst first quarter of winter weather claim experience he’d ever seen. The statistics, quoted in a Boston Globe article, “Mounting insurance claims are remnants of a savage winter,” noted that most claims were concentrated in the Northeast, where winter storms had dumped 9 feet of snow on Greater Boston. According to the article, “That volume of claims was above longtime historic averages, and coupled with the recent more severe winters could prompt many insurance companies to eventually pass the costs on to consumers through higher rates.”

Every region has unique winter weather, and different ways to mitigate the damage. Northern regions will usually have some form of winter precipitation – but they also have the infrastructure to handle it. In these areas, there is more of a risk that mild events can become more dangerous because people are somewhat desensitized to winter weather. Sometimes, they ignore warnings and travel on the roads anyway. Planners should remember to issue continual reminders of just how dangerous winter conditions can be.

Areas of the Southwest are susceptible to mountain snows and extreme cold temperatures. These areas need warming shelters and road crews to deal with snow and ice events when they occur.

Any winter event in the Southeast can potentially become an extreme event, because organizations in this area do not typically have many resources to deal with it. It takes more time to put road crews in place, close schools, and shut down travel. There is also an increased risk for hypothermia, because people are not as aware of the potential dangers cold temperatures can bring. Severe storms and tornadoes can also happen during the winter season in the Southeast.

Figure 1 is a regional map of the United States. Table 1 outlines the major winter weather issues each region should consider and plan for.

Topography influences winter weather

Topography includes cities, rivers, and mountains. Topographical features influence winter weather because they help direct air flow, causing air to rise, fall, and change temperature. Wide open spaces – like those found in the Central U.S. – will increase wind issues.

Timing has a major effect on winter weather safety

Knowing when a winter event will strike is one of the safety official’s greatest assets because it enables a degree of advance warning and planning. But even with early notification, dangerous road conditions that strike during rush hour traffic can be a nightmare. Snowstorms that struck Atlanta, GA and Birmingham, AL a few years ago occurred in the middle of the day without adequate warning or preparation and caused travel-related problems.

Pacing of an event is important – the speed with which it occurs can have adverse impacts

Storms that develop within a few hours can frequently catch people off guard, without appropriate preparation or advance planning. In some regions, like the Northeast, people are so accustomed to winter weather that they ignore the slower, milder events. Many people think it is fine to be out on the roads with a little snowfall, but snow accumulates over time, and it is not long before they are stranded on snowy or icy roads.

As part of considering winter event pacing, emergency planners should become familiar with the terms the National Weather Service (NWS) currently uses to describe winter weather phenomena (snow, sleet, ice, wind chill) that affect public safety, transportation, and/or commerce. Note that for all advisories designated as a “warning,” travel will become difficult or impossible in some situations. For these circumstances, planners should urge people to delay travel plans until conditions improve.

A brief overview of NWS definitions appears on Table 2. For more detailed information, go to https://www.weather.gov/lwx/WarningsDefined.

Planning for winter storms

After hurricanes and tornadoes, severe winter storms are the “third-largest cause of insured catastrophic losses,” according to Dr. Robert Hartwig, immediate past president of the Insurance Information Institute (III), who was quoted in the online publication Property Casualty 360°. “Most winters, losses from snow, ice and other freezing hazards total approximately $1.2 billion, but some storms can easily exceed that average.”

Given these figures, organizations should take every opportunity to proactively plan. Prepare your organization for winter weather. Have a defined plan and communicate it to all staff. The plan should include who is responsible for monitoring the weather, what information is shared and how. Identify the impact to the organization and show how you will maintain your facility, support your customers, and protect your staff.

Once you have a plan, be sure to practice it just as you would any other crisis plan. Communicate the plan to others in the supply chain and to transportation partners. Make sure your generator tank is filled and ready for service.

Implement your plan and be sure to review and revise it based on how events unfold and feedback from those involved.

A variety of tools are available to help prepare action plans for weather events. The following three figures are tools Baron developed for building action plans for various winter weather events.

Use these tools to determine the situation’s threat level, then adopt actions suggested for moderate and severe threats – and develop additional actions based on your own situation.

Weather technology assists in planning for winter events

A crucial part of planning for winter weather is the availability of reliable and detailed weather information to understand how the four factors cited affect the particular event. For example, Baron Threat Net provides mapping that includes local bodies of water and rivers along with street level mapping. Threat Net also provides weather pattern trends and expected arrival times along with their expected impact on specific areas. This includes 48-hour models of temperature, wind speed, accumulated snow, and accumulated precipitation. In addition to Threat Net, the Baron API weather solution can be used by organizations that need weather integrated into their own products and services.

To assist with the pacing evaluation, proximity alerts can flag an approaching wintry mix and snow, and can be used alongside NWS advisories. While those advisories are critical, a storm or event has to reach the NWS threshold for a severe weather event before one is issued. By contrast, technology like proximity alerting is helpful because an event does not have to reach an NWS-defined threshold to be dangerous. Pinpoint alerting capabilities can notify organizations when dangerous storms are approaching. Current-conditions road weather information covers flooded, slippery, icy, and snow-covered roads. The information can be viewed on multiple fixed and mobile devices at one time, including an operation center display, desktop display, mobile phone, and tablet.
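
To make the idea of threshold-based proximity alerting concrete, here is a generic sketch that is not based on Baron’s actual API: it polls a hypothetical forecast feed and raises internal alerts when locally chosen snow or wind-chill thresholds are crossed. The endpoint, field names, and threshold values are all assumptions.

    # Generic sketch of proximity alerting (not Baron's API): poll a hypothetical
    # forecast feed and alert when locally chosen thresholds are crossed.
    import requests

    FACILITY = {"lat": 42.36, "lon": -71.06}        # placeholder coordinates
    FEED_URL = "https://example.com/forecast"       # hypothetical forecast endpoint
    SNOW_12H_THRESHOLD_IN = 2.0
    WIND_CHILL_THRESHOLD_F = 0.0

    def check_thresholds() -> list:
        forecast = requests.get(FEED_URL, params=FACILITY, timeout=10).json()
        alerts = []
        if forecast.get("snow_accum_12h_in", 0.0) >= SNOW_12H_THRESHOLD_IN:
            alerts.append("Snow accumulation threshold crossed - activate road/travel plan")
        if forecast.get("wind_chill_f", 99.0) <= WIND_CHILL_THRESHOLD_F:
            alerts.append("Dangerous wind chill - open warming shelters, notify staff")
        return alerts

    for message in check_thresholds():
        print(message)   # in practice, feed these into the mass notification system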

An example is a Nor’easter that occurred in February 2017 along the East Coast. The Baron forecasting model was accurate and consistent in the placement of the heavy precipitation, including the rain/snow line, leading up to the event and throughout the storm. Models also accurately predicted the heaviest bands of snow, snow accumulation, and wind speed. Based on radar imagery showing the rain-to-snow line slowly moving east, the road conditions product displayed a brief spatial window in which, once the snow fell, roads remained wet for a very short time before becoming snow-covered; this was evident in central and southern New Jersey and southeastern Rhode Island.

Final thoughts on planning for winter weather

Every region within the United States will experience winter weather differently. The key is knowing what you are up against and how you can best respond. Considering the four key factors – location, topography, timing, and pacing – will help your organization plan and respond proactively.

By Ed Beadenkopf, PE

As we view with horror the devastation wrought by recent hurricanes in Florida, South Texas, and the Caribbean, questions are rightly being asked about what city planners and government agencies can do to better prepare communities for natural disasters. The ability to plan and design infrastructure that provides protection against natural disasters is obviously a primary concern of states and municipalities. Likewise, federal agencies such as the Federal Emergency Management Agency (FEMA), the U.S. Army Corps of Engineers (USACE), and the U.S. Bureau of Reclamation cite upgrading aging water infrastructure as a critical priority.

Funding poses a challenge

Addressing water infrastructure assets is a major challenge for all levels of government. While cities and municipalities are best suited to plan individual projects in their communities, they do not have the funding and resources to address infrastructure issues on their own. Meanwhile, FEMA, USACE and other federal agencies are tasked with broad, complex missions, of which flood management and resiliency is one component.

Federal funding for resiliency projects is provided in segments, which inadvertently prevents communities from addressing a problem in its entirety. Instead, the funding must be divided among smaller projects that never address the whole issue. To make matters even more challenging, recent reports indicate that the White House plan for infrastructure investment will require leveraging a major percentage of funding from state and local governments and the private sector.

Virtually, long-term planning is the solution

So, what’s the answer? How can we piece together an integrated approach between federal and local governments with segmented funding? Put simply, we need effective, long-term planning.

Cities can begin by planning smaller projects that can be integrated into the larger, federal resilience plan. Local governments can address funding as a parallel activity to their master planning. Comprehensive planning tools, such as the Atkins-designed City Simulator, can be used to stress test proposed resilience-focused master plans.

A master plan developed using the City Simulator technology is a smart document that addresses the impact of growth on job creation, water conservation, habitat preservation, transportation improvements, and waterway maintenance. It enables local governments to be the catalyst for high-impact planning on a smaller scale.

By simulating a virtual version of a city growing and being hit by climate change-influenced disasters, City Simulator measures the real impacts and effectiveness of proposed solutions and can help lead the way in selecting the improvement projects with the highest return on investment (ROI). The resulting forecasts of ROIs greatly improve a community’s chance of receiving federal funds.

Setting priorities helps with budgeting

While understanding the effectiveness of resiliency projects is critical, communities must also know how much resiliency they can afford. For cities and localities prone to flooding, a single resiliency asset can cost tens of millions of dollars, and its maintenance could exhaust an entire capital improvement budget if planners let it. Using effective cost forecasting and schedule optimization tools that look at the long-term condition of existing assets can help planners prioritize critical projects that require maintenance or replacement, while knowing exactly what impact these projects will have on local budgets and whether additional funding will be necessary.

It is imperative to structure a funding solution that can address these critical projects before they become recovery issues. Determining which communities are affected by the project is key to planning how to distribute equitable responsibility for the necessary funds to initiate the project. Once the beneficiaries of the project are identified, local governments can propose tailored funding options such as Special Purpose Local Option Sales Tax, impact fees, grants, and enterprise funds. The local funding can be used to leverage additional funds through bond financing, or to entice public-private partnership solutions, potentially with federal involvement.

Including flood resiliency in long-term infrastructure planning creates benefits for the community that go beyond flood prevention, while embracing master planning has the potential to impact all aspects of a community’s growth. Local efforts of this kind become part of a larger national resiliency strategy that goes beyond a single community, resulting in better prepared cities and a better prepared nation.

Ed Beadenkopf, PE, is a senior project director in SNC-Lavalin’s Atkins business with more than 40 years of engineering experience in water resources program development and project management. He has served as a subject matter expert for the Federal Emergency Management Agency, supporting dam and levee safety programs.

There’s a crack in California. It stretches for 800 miles, from the Salton Sea in the south, to Cape Mendocino in the north. It runs through vineyards and subway stations, power lines and water mains. Millions live and work alongside the crack, many passing over it (966 roads cross the line) every day. For most, it warrants hardly a thought. Yet in an instant, that crack, the San Andreas fault line, could ruin lives and cripple the national economy.

In one scenario produced by the United States Geological Survey, researchers found that a big quake along the San Andreas could kill 1,800 people, injure 55,000, and wreak $200 billion in damage. It could take years, nearly a decade, for California to recover.

On the bright side, during the process of building and maintaining all the infrastructure that crosses the fault, geologists have gotten an up-close and personal look at it over the past several decades, contributing to a growing and extensive body of work. While the future remains uncertain (no one can predict when an earthquake will strike), people living near the fault are better prepared than they have ever been before.

...

https://www.popsci.com/extreme-science-san-andreas

Sunday, 25 February 2018 13:35

Extreme Science: The San Andreas Fault

Damage to reputation or brand, cyber crime, political risk and terrorism are some of the risks that private and public organizations of all types and sizes around the world must face with increasing frequency. The latest version of ISO 31000 has just been unveiled to help manage the uncertainty.

Risk enters every decision in life, but clearly some decisions need a structured approach. For example, a senior executive or government official may need to make risk judgements associated with very complex situations. Dealing with risk is part of governance and leadership, and is fundamental to how an organization is managed at all levels.

Yesterday’s risk management practices are no longer adequate to deal with today’s threats and they need to evolve. These considerations were at the heart of the revision of ISO 31000, Risk management – Guidelines, whose latest version has just been published. ISO 31000:2018 delivers a clearer, shorter and more concise guide that will help organizations use risk management principles to improve planning and make better decisions. Following are the main changes since the previous edition:

...

https://www.iso.org/news/ref2263.html

Thursday, 15 February 2018 15:54

The new ISO 31000 keeps risk management simple

Some things are hard to predict. And others are unlikely. In business, as in life, both can happen at the same time, catching us off guard. The consequences can cause major disruption, which makes proper planning, through business continuity management, an essential tool for businesses that want to go the distance.

The Millennium brought two nice examples, both of the unpredictable and the improbable. For a start, it was a century leap year. This was entirely predictable (it occurs any time the year is cleanly divisible by 400). But it’s also very unlikely, from a probability perspective: in fact, it’s only happened once before (in 1600, less than 20 years after the Gregorian calendar was introduced).
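
For the curious, the century leap-year rule the example relies on is easy to verify with a few lines of Python:

    def is_leap_year(year: int) -> bool:
        # Gregorian rule: divisible by 4, except century years, unless divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Century leap years from the Gregorian reform onward: only 1600 and 2000 so far.
    print([y for y in range(1600, 2401, 100) if is_leap_year(y)])  # [1600, 2000, 2400]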

A much less predictable event in 2000 happened in a second-hand bookstore in the far north of rural England. When the owner of Barter Books discovered an obscure war-time public-information poster, it triggered a global phenomenon. Although it took more than a decade to peak, just five words spawned one of the most copied cultural memes ever: Keep Calm and Carry On.

...

https://www.iso.org/news/ref2240.html

Mahoning County is located on the eastern edge of Ohio at the border with Pennsylvania. It has a total area of 425 square miles, and as of the 2010 census, its population was 238,823. The county seat is Youngstown.

Challenges

  • Eliminate application slowdowns caused by backups spilling over into the workday
  • Automate remaining county offices that were still paper-based
  • Extend use of data-intensive line-of-business applications such as GIS

...

https://www.riverbed.com/customer-stories/mahoning-county-ohio.html