Industry Hot News

National Preparedness Month (NPM) is recognized each September to promote family and community disaster and emergency planning now and throughout the year. The 2019 theme is “Prepared, Not Scared. Be Ready for Disasters.”

2019 Weekly Themes

  • Week 1 (Sept 1-7): Save Early for Disaster Costs

  • Week 2 (Sept 8-14): Make a Plan to Prepare for Disasters

  • Week 3 (Sept 15-21): Teach Youth to Prepare for Disasters

  • Week 4 (Sept 22-30): Get Involved in Your Community’s Preparedness

Hashtags

  • #NatlPrep
  • #PrepareNow
  • #FloodSmart
  • #YouthPrep
  • #ReadyKids

Graphics, Videos, and Related links

For more engaging content, attach graphics that are sized appropriately for specific social media platforms (e.g., Twitter and Facebook).


Social Media Content

Week 1:  Save Early for Disaster Costs  


Week 2: Make a Plan


Social Media Posts

  • Be Prepared. Make an emergency plan today & practice it: www.ready.gov/plan #PrepareNow #NatlPrep

  • Preparing your family for an emergency is as simple as a conversation over dinner. Get started with tips from @Readygov: ready.gov/plan #PrepareNow #NatlPrep

  • It’s important to include kids in the disaster planning process. Review your family emergency plan together so that they know what to do even if you are not there: ready.gov/kids #YouthPrep #PrepareNow #NatlPrep

  • Practice your fire escape plan by having a home fire drill at least twice a year with everyone in the home. #PrepareNow #NatlPrep

  • Download a group texting app so your entire circle of family and friends can keep in touch before, during & after an emergency. #NatlPrep #PrepareNow

  • Practice evacuating in the car with your animals, so they’re more familiar if you need to evacuate in an emergency. #NatlPrep #PrepareNow

  • Be prepared. Get the @fema app with weather alerts for up to 5 locations, plus disaster resources and safety tips: fema.gov/mobile-app #NatlPrep #PrepareNow.

  • Contact your water and power companies to get on a “priority reconnection service” list of power-dependent customers if you rely on electrical medical equipment. #PrepareNow

  • Learn how to turn off utilities like natural gas in your home. ready.gov/safety-skills #PrepareNow #NatlPrep

  • Be prepared for a power outage by having enough food, water, & meds to last for at least 72 hours: ready.gov/kit #PrepareNow

Week 3: Youth Preparedness


Social Media Posts

  • Teach children what to do in an emergency if they are at home or away from home. ready.gov/kids #PrepareNow #NatlPrep #YouthPrep

  • Help your kids know how to communicate during an emergency. Review these topics with them: sending text messages, emergency contact numbers, and dialing 9-1-1 for help. ready.gov/kids #PrepareNow #NatlPrep #YouthPrep

  • Update school records and discuss emergency contact numbers with kids before they go: ready.gov/make-a-plan  #BackToSchool #YouthPrep

  • Add your kids’ school’s social media info to the family communication plan: ready.gov/kids/make-a-plan #YouthPrep #ReadyKids

  • Review your family emergency communications plan with kids at your next household meeting. #YouthPrep #ReadyKids

  • Include your child's medication or supplies in your family’s emergency kit. For more tips visit: ready.gov/kit #YouthPrep #ReadyKids

  • Include your child's favorite stuffed animals, board games, books or music in their emergency kit to comfort them in a disaster. #YouthPrep

  • Get the kids involved in building their own emergency kit: www.ready.gov/kids/build-a-kit  #YouthPrep #ReadyKids

  • Kids can #BeAForce... by playing the online emergency preparedness "Build a Kit" game: www.ready.gov/kids/games #YouthPrep #ReadyKids

  • Speak Up! Ask your child’s teacher about the plans the school has in place for emergencies. #BacktoSchool #YouthPrep www.healthychildren.org/English/safety-prevention/all-around/Pages/Actions-Schools-Are-Taking-to-Make-Themselves-Safer.aspx

  • Your kids can become Disaster Masters with this @Readygov preparedness game: www.ready.gov/kids/games #YouthPrep

  • Are your students prepared for an emergency? Download curriculum for grades 1-12 for your classroom: www.ready.gov/kids/educators #YouthPrep

Week 4: Get Involved in Your Community’s Preparedness


Social Media Posts

  • Community Emergency Response Teams (CERTs) train volunteers to prepare for the types of disasters their community may face. Find your local CERT: https://community.fema.gov/Register/Register_Search_Programs #NatlPrep
  • Learn about the hazards most likely to affect your community and their appropriate responses. #NatlPrep #PrepareNow
  • Every community has voluntary organizations that work during disasters. Visit https://www.nvoad.org to see what organizations are active in your community. #NatlPrep
  • Encourage students to join Teen CERT so they can respond during emergencies. Learn more: www.fema.gov/media-library/assets/documents/28048 #YouthPrep
  • Your community needs YOU! Find youth volunteer and training opportunities to help your community here: www.ready.gov/youth-preparedness #YouthPrep #NatlPrep
  • Finding support from friends, family, and community organizations can help kids cope with #disasters. #YouthPrep
  • Take classes in lifesaving skills, such as CPR/AED and first aid, or in emergency response, such as CERT. #PrepareNow #NatlPrep
  • Check in with neighbors to see how you can help each other out before and after a storm. #HurricanePrep
  • If you have a disability, plan ahead for accessible transportation that you may need for evacuation or getting to a medical clinic. Work with local services, public transportation or paratransit to identify accessible transportation options. ready.gov/individuals-access-functional-needs #NatlPrep

  • If you have a disability, contact your city or county government’s emergency management agency or office. Many keep lists of people with disabilities so they can be helped quickly in a sudden emergency. ready.gov/individuals-access-functional-needs #NatlPrep

Tuesday, 06 August 2019 15:58

Update on 2019 National Preparedness Month

By Dave Bermingham, Technical Evangelist at SIOS Technology

High availability and disaster recovery protections both require redundant resources configured to minimize or eliminate single points of failure. Because failures sometimes occur on a large scale, a best practice is to put some geographical distance between some of these resources. Amazon Web Services meets this need by offering multiple Availability Zones and Regions to facilitate business continuity during all likely failures—from a single server crashing to a widespread natural disaster.

This article provides practical guidance to help database and system administrators tasked with protecting SQL Server databases running in the AWS cloud. The high availability (HA) and disaster recovery (DR) provisions available with the AWS cloud and the SQL Server software are covered first in separate sections. This is followed by a third section outlining how these provisions can be used in a cost-effective configuration that combines HA and DR protections in a failover cluster spanning multiple AWS Availability Zones and Regions.

Multiple Availability Zones and Regions in the AWS Cloud

Fully protecting applications, including those with SQL Server databases, from all possible outages requires recognizing the differences between “failures” and “disasters” because those differences determine the different provisions needed for HA and DR. Failures are short in duration and small in scale, affecting a server, rack, or the power or cooling in a datacenter. Disasters have more widespread and enduring impacts, affecting multiple facilities, including offices and datacenters alike, in ways that preclude rapid localized recovery.

The most consequential difference involves the location of the redundant resources (systems, software and data), which can be local—on a Local Area Network—for recovering from a localized failure. By contrast, the redundant resources required to recover from a widespread disaster must span a Wide Area Network. For database applications that require high transactional throughput performance, the ability to replicate the active instance’s data synchronously across the LAN enables the standby instance to be “hot” and ready to take over immediately and automatically in the event of a failure. Such rapid response should be the goal of all HA provisions.

Because the latency inherent in a WAN would adversely impact throughput performance on the active instance when using synchronous replication, data is usually replicated asynchronously in DR configurations. This means that updates made to the standby instance always lag behind updates made to the active instance, which leaves the standby instance “warm” and results in an unavoidable delay during the manual recovery process.

AWS Availability Zones (AZs) offer the best of both by combining the synchronous replication available on a LAN with some geographical separation previously possible only in the WAN. AZs connect multiple datacenters within an AWS region via a low latency, high throughput network that facilitates synchronous commit with negligible impact on database performance. In many regions, the latency across AZs is less than one millisecond, which has made the use of multi-zone configurations a new best practice for HA failover clusters.

For additional protection against major disasters that could affect multiple Availability Zones, AWS operates multiple Regions throughout the world. Amazon employs encrypted Virtual Private Cloud (VPC) peering among Regions to deliver highly reliable and secure communications. As expected, replicating data across AWS Regions will need to be done asynchronously for SQL Server databases, and to ensure minimal or no data loss, the recovery will need to be performed manually. The resulting delay in DR provisions is tolerable, however, because Region-wide disasters are rare.
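To make the zone and Region layout concrete, the short Python sketch below uses the AWS SDK for Python (boto3) to list the Availability Zones currently available in two example Regions, the kind of quick check an administrator might run when deciding where to place cluster nodes and a remote DR instance. The Region names are placeholders, and the snippet is illustrative only; it is not part of any particular failover clustering product.

```python
# Illustrative sketch: list the Availability Zones in a few example Regions.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3


def list_availability_zones(region_name):
    """Return the names of the Availability Zones currently available in a Region."""
    ec2 = boto3.client("ec2", region_name=region_name)
    response = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )
    return [zone["ZoneName"] for zone in response["AvailabilityZones"]]


if __name__ == "__main__":
    for region in ("us-east-1", "us-west-2"):  # placeholder Regions; substitute your own
        zones = list_availability_zones(region)
        print(f"{region}: {len(zones)} zones available -> {', '.join(zones)}")
```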

SQL Server’s Always On Availability Groups and Failover Cluster Instances

SQL Server offers two of its own options for HA and DR protections: Failover Cluster Instances (FCIs) and Always On Availability Groups. FCIs have two notable advantages: the feature is included in the less expensive Standard Edition, and they protect the entire SQL Server instance, including user and system databases. A major disadvantage is that Windows Server Failover Clustering (WSFC) requires shared storage, such as a storage area network (SAN), as a means to replicate (or, more accurately, share) data between the active and standby instances. The problem: shared storage has not historically been available in the AWS cloud, or in any other public cloud.

The lack of shared storage in the cloud was addressed in the Datacenter Edition of Windows Server 2016 with Storage Spaces Direct (S2D), which also received concurrent support in SQL Server 2016. S2D is software-defined storage that creates a virtual SAN, enabling data to be shared between multiple instances. S2D requires that the servers reside within a single datacenter, however, making it incompatible with Availability Zones. For this reason, using FCI for HA and/or DR protections across multiple AWS AZs and Regions requires using a third-party solution for data replication.

The other SQL Server option is Always On Availability Groups. This option is more capable than FCIs for both HA and DR, and it possesses some other notable advantages, such as readable secondaries (with appropriate licensing) and no restrictions on the size of databases. But it requires licensing the more expensive Enterprise Edition, and that makes this option cost-prohibitive for many database applications. Another limitation is that only the user database is replicated, creating the need for separate provisions to protect the entire SQL Server instance.

Using an application-specific HA/DR solution like Always On Availability Groups has another disadvantage: Separate HA and/or DR provisions will be needed to protect all other applications, including those using a different database. Having multiple HA/DR solutions can substantially increase complexity and costs for licensing, training, implementation and ongoing operations. This is yet another reason why both database and system administrators increasingly prefer to use general-purpose failover clustering solutions.

Consolidating HA and DR Protections in a SANless Failover Cluster

The lack of shared storage in the cloud has long been addressed by third-party failover clustering solutions purpose-built for HA and DR protections in private, public and hybrid cloud environments. These solutions are implemented entirely in software to enable creating, as their designation implies, a cluster of servers and storage—sans SANs—and with rapid, automatic failover to assure high availability at the application level.

Versions for Windows Server are designed to work seamlessly with WSFC by providing real-time, block-level data replication both on premises and in a cloud-based SANless environment. A major advantage for SQL Server is support for FCIs without any need to compromise availability or performance. These solutions also typically overcome a limitation imposed by the Standard Edition of SQL Server, which allows only two FCI nodes in a failover cluster. As will be shown in the example below, the ability to have a two-node cluster spanning Availability Zones, along with a third instance in a different Region, affords mission-critical HA/DR protections in a single configuration.
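As a quick sanity check on such a layout, the hypothetical Python sketch below uses boto3 to print the Availability Zone of each running EC2 instance tagged as a cluster node, confirming that the nodes really do span different zones. The tag key and value are an assumed naming convention, not something defined by WSFC or by any clustering vendor.

```python
# Illustrative sketch: confirm that tagged cluster nodes span different Availability Zones.
# Assumes boto3 is installed and credentials are configured; the "Role" tag is a
# hypothetical convention used only for this example.
import boto3


def cluster_node_placement(region_name, tag_key="Role", tag_value="sql-cluster-node"):
    """Map each running, tagged instance ID to its Availability Zone."""
    ec2 = boto3.client("ec2", region_name=region_name)
    response = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    placement = {}
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            placement[instance["InstanceId"]] = instance["Placement"]["AvailabilityZone"]
    return placement


if __name__ == "__main__":
    for instance_id, zone in cluster_node_placement("us-east-1").items():  # placeholder Region
        print(f"{instance_id}: {zone}")
```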

Versions for Linux, which lacks a fundamental clustering capability equivalent to WSFC, must provide a complete HA/DR solution that includes data replication, continuous application-level monitoring and configurable failover/failback recovery policies. Linux is becoming increasingly popular for SQL Server databases and other enterprise applications, and third-party failover clustering solutions now make configuring HA/DR protections nearly as easy as it is for Windows Server. Without such a solution, administrators would be forced to struggle to make open source software work dependably in full, application-specific HA/DR stacks. For this reason, only the very largest organizations have the wherewithal (skill set and staffing) to even consider taking on such an ongoing effort.

While specific to the operating system, most failover clustering software is application-agnostic, enabling administrators to have a single, universal HA/DR solution. Most such solutions also offer a variety of value-added capabilities. Examples include data compression and other forms of WAN optimization to reduce bandwidth utilization in multi-region clusters, minimalist “warm” standby configurations that also reduce costs, and manual switchover of active and standby instances to facilitate planned maintenance and routine backups with minimal disruption to the applications.

“Undersizing” standby instances can afford considerable savings. Because the standby instance rarely runs a production workload, it is possible to reduce costs by allocating minimal resources (e.g., CPU, memory and network bandwidth) while it functions in its normal standby mode. The tradeoff is that, in the event of a failover, the allocation will need to be resized before the instance can become the active node. This extra step adds to the recovery time because it requires a reboot. There are other factors to consider as well, such as I/O requirements and the storage limitations of smaller instance types. But when viable, the cost savings can be significant.
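For illustration, the hedged Python sketch below shows what that resizing step can look like with boto3: stop the undersized standby, change its instance type, and start it again before promoting it. The instance ID and target instance type are placeholders, and production failover tooling would normally automate this as part of its recovery policy.

```python
# Illustrative sketch: resize a "warm" standby EC2 instance before promotion.
# Assumes boto3 is installed and credentials are configured; the instance ID and
# target type below are placeholders.
import boto3


def resize_standby(instance_id, target_type, region_name):
    ec2 = boto3.client("ec2", region_name=region_name)

    # The instance type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Switch to a production-class instance type, then start the node again.
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": target_type}
    )
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])


if __name__ == "__main__":
    resize_standby("i-0123456789abcdef0", "r5.2xlarge", "us-east-1")
```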

Additional savings can be achieved by compressing the data that traverses the WAN, especially in hybrid cloud configurations. The higher the compression, the higher the CPU utilization, so some tuning is usually needed to achieve the optimal balance.
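That tradeoff is easy to see with a toy experiment. The Python sketch below, using only the standard library’s zlib module on synthetic, highly compressible data, shows how higher compression levels shrink the payload further while taking more CPU time; real replication traffic will compress differently, so the numbers are illustrative only.

```python
# Illustrative sketch: compression level vs. CPU time on synthetic data.
import time
import zlib

payload = b"transaction log block " * 50_000  # synthetic, repetitive data

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    ratio = len(compressed) / len(payload)
    print(f"level {level}: {ratio:.1%} of original size in {elapsed_ms:.1f} ms")
```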

The diagram shows a popular AWS configuration that provides both HA and DR protections in a VPC that distributes three SQL Server instances across multiple Availability Zones and Regions. For clusters spanning multiple Availability Zones within a single AWS Region, the data replication is synchronous, enabling rapid automatic failovers from all localized failures. For clusters spanning multiple AWS Regions, the data replication must be asynchronous to avoid adversely impacting throughput performance, and failovers will need to employ manual processes to minimize the potential for data loss.

[Diagram: SIOS SANless failover cluster spanning multiple AWS Availability Zones and Regions]

This popular SANless failover cluster configuration consists of a two-node HA cluster spanning two AWS Availability Zones, along with a third instance deployed in a separate AWS Region to facilitate a full recovery after a widespread disaster.

It is also possible to have two- and three-node configurations in a hybrid cloud environment for HA and/or DR purposes. One such three-node configuration is a two-node HA cluster located in an enterprise datacenter with a third instance located in the AWS cloud for DR protection—or vice versa.

Confidence in the AWS Cloud

As of this writing, AWS has 61 Availability Zones deployed in 20 Regions, making the AWS Global Infrastructure eminently capable of providing carrier-class HA/DR protection for SQL Server databases. But with a purpose-built failover clustering solution, such carrier-class high availability need not mean paying a carrier-like high cost. Because SANless failover clustering software makes effective and efficient use of all AWS compute, storage and networking resources, while also being easy to implement and operate, these solutions minimize ongoing costs, resulting in robust HA and DR protections being more affordable than ever before.

The security, agility, scalability and high availability made possible by overlaying SANless failover clusters atop multiple, geographically dispersed Availability Zones and Regions should give even the most risk-averse administrators the confidence needed to migrate mission-critical SQL Server databases and other applications to the AWS cloud.

About the Author

David Bermingham is Technical Evangelist at SIOS Technology. He is recognized within the technology community as a high-availability expert and has been honored to be elected a Microsoft MVP for the past 8 years: 6 years as a Cluster MVP and 2 years as a Cloud and Datacenter Management MVP. David holds numerous technical certifications and has more than thirty years of IT experience, including in finance, healthcare and education.

By OSCAR MUNOZ

No business anticipates technological disruptions or natural disasters such as floods, earthquakes, fires and tornadoes. These events occur when least expected and can result in a major loss of personnel, workplaces, dependencies, revenue, or potentially all of these. For instance, consider a huge fire that endangers the workplace, inflicting structural damage on all nearby buildings. If employees are traveling to the affected worksites during this time, it may not be safe for them. So where can employees go if the affected worksite(s) are unavailable due to the structural damage caused by the disaster? How can employees continue to perform their day-to-day activities while minimizing downtime?

In the event of a disaster, it is important for employers to be prepared and for employees to be made aware of the potential dangers and provided with instructions on where to work if their worksite(s) are unavailable. It is ultimately the responsibility of leadership to ensure the safety of their employees and their ability to continue to run the business.

In today’s world, earthquakes, floods, and civil unrest occur more frequently and create havoc for businesses, costing them millions of dollars in revenue and, in some worst-case scenarios, forcing them out of business. Following a disaster, almost 90% of smaller companies fail within a year unless they can resume operations within 5 days. To minimize the severity of impact, 20% of larger companies will spend over 10 days per month on their continuity plans. (FEMA)

Develop A Business Continuity Program

By designing a Business Continuity (BC) Program, you can increase the chances of your business surviving unforeseen disruptions. There are best practices and standards (e.g., DRJ and ISO 22301) that help establish and outline the criteria for a Business Continuity, Disaster Recovery and Emergency Management program. These standards are meant to guide organizations and promote a shared understanding of the fundamentals of Business Resiliency.

Before developing a BCP, there are preliminary steps to be considered. Seeking professional Business Continuity consulting is a good start to obtain the best results for the program. A Business Continuity consultant should be able to walk a client step by step through how to design the right program for their business. They should also be able to provide the framework, tools and training for the program to be successful. A Business Continuity program should be consistent with the organization’s mission, management policy, financial commitments, and be assessed and improved on an annual basis.

What Your Business Continuity Plans Should Look Like

Once the Business Continuity Program is up and running, the end goal is to have well documented and tested Business Continuity plans. This will ensure your employees understand their roles during a disruptive event and the business can quickly become operational again. The plans should include an outline of recovery strategies and who should be responsible for specific tasks to assist in implementation when the disruptive event takes place. The Business Continuity Plan should be assessed regularly to meet compliance requirements by conducting periodic reviews, testing, and evaluating post-incident reports and overall improving the program.

Your Business Continuity Plan should include the following

  • Appropriate recovery strategies for a variety of loss scenarios
  • Mitigation plan that establishes interim and long-term actions to minimize downtime during recovery
  • Short-term and long-term strategies that address processes, staff, and acceptable time frames for the restoration of services, facilities, programs, and technology.
  • Documented critical and time-sensitive applications recovery procedures, vital records, processes, and functions that have a critical impact to your business if unavailable.
  • Call tree or notification procedure to activate the plan
  • List of recovery team members with detailed contact information who can carry out specific tasks during recovery

Be Prepared

All businesses are subject to unplanned and inevitable disasters that pose potential threats. Having a well-documented & exercised Business Continuity Program can mean the difference between a business that is resilient and a business that fails when disaster strikes. Be prepared by reviewing your business continuity plan now.

Oscar Munoz is a Business Continuity Consultant at Virtual Corporation with a combined 6 years of experience in IT, Business Continuity, Vendor Management and Business Analytics. He brings a deep understanding of IT and Business Resiliency and carries a broad set of skills spanning technical, business risk, program, and vendor management. Oscar can conduct Business Impact Analysis, Business Continuity Planning and exercise validation on processes and regulations at all layers of an organization, including analyzing and implementing solutions to meet regulatory requirements and managing disasters that lead to business disruptions.

Interested in learning more about how to construct or revise your business continuity plans? Contact Virtual Corporation today for business continuity and organizational resilience solutions.
https://www.virtual-corp.com/consulting-services/
Tuesday, 23 July 2019 19:58

The Value of a BC Program

Originally appeared on the DCIG blog.

 

By 

As more organizations embrace a cloud-first model, everything in their IT infrastructure comes under scrutiny, including backup and recovery. A critical examination of this component of their infrastructure often prompts them to identify their primary objectives for recovery. Ultimately, they want simplified application recoveries that meet their recovery point and recovery time objectives. To deliver this improved recovery experience, organizations may now turn to a new generation of disaster-recovery-as-a-service (DRaaS) offerings.

A Laundry List of DRaaS’ Past Shortcomings

DRaaS may not be the first solution that comes to mind when organizations look to improve their recovery experience. They may not even believe DRaaS solutions can address their recovery challenges. Instead, DRaaS may imply that organizations must first:

  1. Figure out how to pay for it
  2. Accept there is no certainty of success
  3. Do an in-depth evaluation of their IT infrastructure and applications
  4. Re-create their environment at a DR site
  5. Perform time consuming tests to prove DRaaS works
  6. Dedicate IT staff for days or weeks to gather information and perform DR tests

This perception of DRaaS may have held true at some level in the past. However, any organization that still adheres to this view needs to take a fresh look at how DRaaS providers now deliver their solutions.

The Evolution of DRaaS Providers

DRaaS providers have evolved in four principal ways to take the pain out of DRaaS and deliver the simplified recovery experiences that organizations seek.

1. They recognize recovery experiences are not all or nothing events.

In other words, DRaaS providers now make provisions in their solutions to do partial on-premises recoveries. In the past, organizations may have only called upon DRaaS providers when they needed a complete off-site DR of all applications. While some DRaaS providers still operate that way, that no longer applies to all of them.

Now organizations may call upon a DRaaS provider to help with recoveries even when they experience just a partial outage. This application recovery may occur on an on-premises backup appliance provided by the DRaaS provider as part of its offering.

2. They use clouds to host recoveries.

Some DRaaS providers may still make physical hosts available for some application recoveries. However, most make use of purpose-built or general-purpose clouds for application recoveries. DRaaS providers use these cloud resources to host an organization’s applications to perform DR testing or a real DR. Once completed, they can re-purpose the cloud resources for DR and DR testing for other organizations.

3. They gather the needed information for recovery and build out the templates needed for recovery.

Knowing what information to gather and then using that data to recreate a DR site can be a painstaking and lengthy process. While DRaaS providers have not eliminated this task, they shorten the time and effort required to do it. They know the right questions to ask and data to gather to ensure they can recover your environment at their site. Using this data, they build templates that they can use to programmatically recreate your IT environment in their cloud.
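To illustrate the idea only, the sketch below models such a template as a small Python data structure and replays it in recovery-tier order. Every field name and the provision_host() stub are hypothetical; actual DRaaS providers use their own formats and orchestration tooling.

```python
# Purely illustrative: a declarative "recovery template" replayed in tier order.
# Field names and provision_host() are hypothetical stand-ins.
RECOVERY_TEMPLATE = [
    {"name": "sql-prod-01", "os": "windows-2019", "cpu": 8, "memory_gb": 64, "recovery_tier": 1},
    {"name": "app-prod-01", "os": "rhel-8", "cpu": 4, "memory_gb": 16, "recovery_tier": 2},
]


def provision_host(spec):
    """Stand-in for a provider's real provisioning call."""
    print(f"provisioning {spec['name']} ({spec['cpu']} vCPU, {spec['memory_gb']} GB, {spec['os']})")


def rebuild_environment(template):
    # Recover the most critical tier first.
    for spec in sorted(template, key=lambda s: s["recovery_tier"]):
        provision_host(spec)


if __name__ == "__main__":
    rebuild_environment(RECOVERY_TEMPLATE)
```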

4. They can perform most or all the DR on your behalf.

When a disaster strikes, the stress meter for IT staff goes straight through the roof. This stems, in part, from the fact that few, if any, of them have ever been called upon to perform a DR. As a result, they have no practical experience in performing one.

In response to this common shortfall, a growing number of DRaaS providers perform the entire DR, or minimally assist with it. Once they have recovered the applications, they turn control of the applications over to the company. At that point, the company may resume its production operations running in the DRaaS provider’s site.

DRaaS Providers Come of Age

Organizations should have a healthy fear of disasters and the challenge that they present for recovery. To pretend that disasters never happen ignores the realities that those in Southern California and Louisiana may face right now. Disasters do occur and organizations must prepare to respond.

DRaaS providers now provide a means for organizations to implement viable DR plans. They provide organizations with the means to recover on-premises or off-site and can do the DR on their behalf. Currently, small and midsize organizations remain the best fit for today’s DRaaS providers. However, today’s DRaaS solutions foreshadow what should become available in the next 5-10 years for large enterprises as well.

Thursday, 11 July 2019 21:22

DRaaS Providers Come of Age

Cyber threats are simply a business reality in the modern age, but with the right knowledge and tools, we can protect our businesses, employees and customers. Davis Malm’s Robert Munnelly outlines five actions companies can take to maximize long-term cyber safety.

Decades of experience in the age of broadband and security breaches have taught us important lessons about the steps companies should take to protect themselves, their employees and their customers from cybersecurity threats. Every company should adopt specific action items to maximize its opportunities for long-term cyber safety in this increasingly interconnected world.

Following are five actions companies must take to prepare.

...

https://www.corporatecomplianceinsights.com/cyber-safety-minimize-risk/

By BRITT LEWIS

Senior Vice President, Direct Sales and Business Development, Inmarsat Government Inc.

Seeing a disaster unfold on television or online triggers many emotions. It can, on occasion, be very difficult to watch the images being broadcast. Yet to arrive in a devastated region in person as a first responder? The impact can be beyond description. Responders are surrounded by victims who need medical attention, need food and water, and are desperate to find and connect with their loved ones.

In providing relief, first responders must focus on these victims, without worrying about whether they are able to communicate with commanders at another site or send damage-related video and data to them. Nor should they be expected to have a detailed mastery of how a communication system works. The mission is about assistance and relief, not connectivity set-up. However, in any disaster, reliable communications is of critical importance.

In normal circumstances, we consider cell phone coverage as ubiquitous – a given. Yet, that is not always the case at a disaster scene, where commercial networks may be overloaded or sustain damage. Access to reliable, easy-to-install-and-operate communications amid such circumstances can, on first examination, appear very difficult to achieve.

But FirstNet, America’s dedicated public safety broadband communications platform, is changing that. It’s being built with AT&T in a public-private partnership with the First Responder Network Authority – an independent government authority. Since its launch, FirstNet has been reliably supporting public safety’s response to emergency and everyday situations. Public safety agencies used FirstNet during last year’s wildfires and hurricanes, as well as the tornadoes and flooding events this year. And FirstNet has stood up to the challenge, keeping first responders connected and enabling them to communicate when other systems went down.

Satellite communications (SATCOM) are a critical part of the FirstNet communications portfolio, helping to deliver the capabilities that “First In/Last Out” responders depend upon in hard-hit disaster areas. Inmarsat Government is proud to be part of the core team AT&T selected to help deliver the FirstNet communication ecosystem, bringing resilient, highly secure SATCOM capabilities for our country’s first responders.

The FirstNet ecosystem strengthens public safety communications, enabling faster and more effective coordination in disasters and emergencies. FirstNet users can leverage narrowband and wideband SATCOM solutions, which have been a trusted, reliable choice for public safety agencies’ mission-critical communication needs for nearly half a century and should be part of any disaster response/Continuity of Operations (COOP) planning. Unlike traditional wireline or cellular wireless systems, SATCOM uses satellites to “bounce” voice or data signals to or from a remote user through the sky and back to one or more geographically resilient downlink facilities (“earth stations”), which are connected to the global communication backbone networks. This resiliency enables communications virtually anywhere, since users have a “line-of-sight” path through the air to the satellite.

Through SATCOM solutions, users acquire instant voice, data and video services, using equipment that is often as simple and easy to use as a cell phone, and small and light enough to store in a backpack. These solutions are often embedded in communication systems. SATCOM solutions have proven themselves – over and over again – as irreplaceable in delivering the following, unique capabilities anywhere in the world:

Augmented, constant connectivity. In assessing damage and casualties, responders must connect to the command and control center as well as restore communications for the local community. This requires high bandwidth availability for seamless voice, data, image and video transmissions for a variety of applications. With SATCOM, those running the command and control center operations, for example, can dynamically allocate voice and data resources to where they are needed and to do so in real time. They transfer live video streams from affected areas back to the center so that the command and control center can observe and advise.

SATCOM offers a sound option to first responders. It is a dependable – and often the only option – that augments “terrestrial” (LTE cellular or wireline) communications for enhanced, robust connectivity.

Highly reliable coverage. SATCOM offers ubiquitous satellite coverage no matter where first responders go. SATCOM services use satellites to reach any location on the planet. As such, satellite-based connectivity is unaffected by disasters/emergencies which may destroy local tower infrastructure and is accessible in the most remote or rural areas.

Flexible solutions. Solutions available to FirstNet users range from satellite phones for individual users to portable or vehicle-mounted solutions and fixed satellite capabilities. These fulfill a variety of public safety use case scenarios for SATCOM in remote areas, such as providing law enforcement officers, firefighters or emergency medical technicians (EMT) who operate in remote areas with a satellite phone for highly reliable voice services for emergencies. In addition, the solutions can equip first responder vehicles with dual LTE/SATCOM terminals to maintain constant or on-demand voice/data communications in rural areas. Boats or other maritime craft can be equipped with SATCOM units for operations offshore or over bodies of water where cellular coverage does not exist.

Easy setup/deployment. As indicated, public safety organizations and “First In/Last Out” responder units must focus on the mission at hand. SATCOM allows them to meet their immediate, key objectives through capabilities which involve minimum installation time; users are up and running within a few minutes. For example, first responders in disaster-prone areas often depend upon rapidly deployable “SATCOM go kits,” using satellite phones and/or SATCOM broadband terminals they can easily set up. They can deploy these man-portable or broadband voice/data satellite kits in under 10 minutes, to establish incident command outposts in remote areas for voice, video conferencing and data. By default, the kits link to LTE/cellular networks to create hotspots. Yet, they automatically switchover to satellite global broadband networks anytime local networks are unavailable. This means first responders stay connected during floods, power outages, forest fires and more, regardless of their location and situation.

On-the-move and on-the-pause responders depend upon Vehicular Network Solutions (VNS), which combine LTE and satellite for true “go anywhere” vehicle connectivity. Built specifically for FirstNet users, VNS utilizes cellular or satellite backhaul for in-vehicle communications and/or extends a Wi-Fi “bubble” of connectivity to a small number of users outside the vehicle. It combines an off-the-shelf In-Vehicle Router system with multiple communication input capability and the ability to intelligently select among connectivity paths.

This brings uninterrupted voice and data capability during the “first 60 minutes” after a disaster strikes, which could stretch to days and even weeks as recovery efforts continue, enabling essential communications and information sharing. This is when first responders turn to very small aperture terminals (VSAT), such as Inmarsat Global Xpress, to meet the expanding and increasing needs of their mission. Global Xpress is the first and only end-to-end commercial Ka-band network from a single operator available today. Only a Global Xpress terminal and standard monthly subscription are required to connect anywhere in the world at any time, and then transmit and receive large data, such as that from high-speed internet and video streaming. From the moment the transit case is opened, connectivity can be established in under seven minutes with minimal operator interaction. Once online, committed information rates with 99.5% availability pave the way for mission success. Customers also have single-source access to a U.S.-based network operations center that is certified and cleared and available 24/7/365 with just one phone call.

From the arrival of the first responders within hours of the crisis, and then up to days later as the emergency response and relief mission expands, SATCOM has proven to be there when commercial infrastructure and mobile phone networks may be overloaded, damaged or non-existent. It helps ensure a “First In/Last Out” presence, delivering immediate access that is easy to install and operate, with “anytime/anywhere” connectivity until the mission is completed. Via FirstNet, SATCOM allows responders to meet their immediate, key objectives through capabilities that help ensure connectivity no matter where they are or what circumstances they face. Because these capabilities are highly secure and easy to set up, with readily available support at all times, responders may now perceive high-bandwidth communication access as a given. With this, they can focus entirely on the task at hand: providing support to the victims and communities they serve.

Staying ahead can feel impossible, but understanding that perfection is impossible can free you to make decisions about managing risk.
 

Every few years, there is a significant and often unexpected shift in the tactics that online criminals use to exploit us for profit. In the early 2000s, criminals ran roughshod through people's computers by exploiting simple buffer overflows and scripting flaws in email clients and using SQL injection attacks. That evolved into drive-by downloads through flaws in browsers and their clunky plug-ins. Late in the decade, criminals began employing social components, initially offering up fake antivirus products and then impersonating law enforcement agencies to trick us into paying imaginary fines and tickets. In 2013, someone got the bright idea to recycle an old trick at mass scale: ransomware.

...

https://www.darkreading.com/vulnerabilities---threats/in-cybercrimes-evolution-active-automated-attacks-are-the-latest-fad/a/d-id/1335073

When the only certainty is uncertainty, the IEC and ISO ‘risk management toolbox’ helps organizations to keep ahead of threats that could be detrimental to their success. 

All businesses face threats on an ongoing basis, ranging from unpredictable political landscapes to rapidly evolving technology and competitive disruption. IEC and ISO have developed a toolbox of risk management standards to help businesses prepare, respond and recover more efficiently. It includes a newly updated standard on risk assessment techniques.

IEC 31010, Risk management — Risk assessment techniques, features a range of techniques to identify and understand risk. It has been updated to expand its range of applications and to add more detail than ever before. It complements ISO 31000, Risk management.

...

https://www.iso.org/news/ref2403.html

Archived data great for training and planning

By Glen Denny, Baron Services, Inc.

Historical weather conditions can be used for a variety of purposes, including simulation exercises for staff training, proactive emergency weather planning, and proving (or disproving) hazardous conditions for insurance claims. Baron Historical Weather Data, an optional collection of archived weather data for Baron Threat Net, lets users extract and view weather data from up to 8 years of archived radar, hail and tornado detection, and flooding data. Depending upon the user’s needs, access can be configured for a window of either 30 days or 365 days of historical data. Other available options for historical data have disadvantages, including difficulty in collecting the data, inability to display data or point query a static image, and issues with using the data to make a meteorological analysis.

Using data for simulation exercises for staff training

Historical weather data is a great tool to use for conducting realistic severe weather simulations during drills and training exercises. For example, using historical lightning information may assist in training school personnel on what conditions look like when it is time to enact their lightning safety plan.

Reenactments of severe weather and lightning events are beneficial for school staff to understand how and when actions should have been taken and what to do the next time a similar weather event happens. It takes time to move people to safety at sporting events and stadiums. Examining historical events helps decision makers formulate better plans for safer execution in live weather events.

Post-event analysis for training and better decision making is key to keeping people safe. A stadium filled with fans for a major sporting event with severe weather and lightning can be extremely deadly. Running a post-event exercise with school staff can be extremely beneficial to building plans that keep everyone safe for future events.

Historical data key to proactive emergency planning

School personnel can use historical data as part of advance proactive planning that allows them to take precautionary measures. For example, if an event in the past year caused an issue, like flooding of an athletic field or facility, officials can look back at that day in the archive using the Baron Threat Net total accumulation product, and then compare the forecast precipitation accumulation from the Baron weather model to see if the upcoming weather is of a comparable scale to the event that caused the issue. Similarly, users can look at historical road condition data and compare it to the road conditions forecast.

The data can also be used for making the difficult call to cancel school. The forecast road weather lets officials look at problem areas 24 hours before the weather happens. The historical road weather helps school and transportation officials examine problem areas after the event and make contingency plans based on forecast and actual conditions.

Insurance claims process improved with use of historical data

Should a weather-related accident occur, viewing the historical conditions can be useful in supporting accurate claim validation for insurance and funding purposes. In addition, if an insurance claim needs to be made for damage to school property, school personnel can use the lightning, hail path, damaging wind path, or critical weather indicators to see precisely where and when the damage was likely to have occurred. 

Similarly, if a claim is made against a school system due to a person falling on an icy sidewalk on school property, temperature from the Baron current conditions product and road condition data may be of assistance in verifying the claim.

Underneath the hood

The optional Baron Historical Weather Data addition to the standard Baron Threat Net subscription includes a wide variety of data products, including high-resolution radar, standard radar, infrared satellite, damaging wind, road conditions, and hail path, as well as 24-hour rainfall accumulation, current weather, and current threats.

Offering up to 8 years of data, users can select a specific product and review up to 72 hours of data at one time, or review a specific time for a specific date. Information is available for any given area in the U.S., and historical products can be layered, for example, hail swath and radar data. Packages are available in 7-day, 30-day, or 1-year increments.

Other available options for historical weather data are lacking

There are several ways school and campus safety officials can gain access to historical data, but many have disadvantages, including difficulty in collecting the data, inability to display the data, and the inability to point query a static image. Also, officials may not have the knowledge needed to use the data for making a meteorological analysis. In some cases, including road conditions, there is no available archived data source.

For instance, radar data may be obtained from the National Centers for Environmental Information (NCEI), but the process is not straightforward, making it time-consuming. Users may have radar data but lack the knowledge base to interpret it. By contrast, with Baron Threat Net Historical Data, radar imagery can be displayed with critical weather indicators overlaid, taking the guesswork out of the equation.

There is no straightforward path to obtaining historical weather conditions for specific school districts. The local office of the National Weather Service may be of some help but their sources are limited. By contrast, Baron historical data brings together many sources of weather and lightning data for post-event analysis and validation. Baron Threat Net is the only online tool in the public safety space with a collection of live observations, forecast tools, and historical data access.

https://www.virtual-corp.com/business-continuity/table-top-exercise-revelations/

 

By Bob Farkas, PMP, AMBCI, SCRA

One of the most useful, insightful, and entertaining business continuity activities is the table top exercise. Table top exercises are generally well known to Business Continuity practitioners as an important step in emergency preparedness and disaster recovery planning. They typically involve key personnel discussing simulated scenarios, the parts their roles play, and how to respond in emergency situations. In this article, I present a real example that illustrates the kind of useful information an exercise can yield. Moreover, an exercise scenario does not have to be complicated to provide value. To quote Leonardo da Vinci, “Simplicity is the ultimate sophistication.”

Setting the Stage 

Recently, a West Coast high-tech firm requested assistance from Virtual Corporation with implementing a business resiliency program throughout the enterprise. Each department deemed in scope completed a Business Impact Analysis (BIA) and developed its initial Business Continuity Plan. If a department’s Recovery Time Objective (RTO) was 24 hours or less, it would conclude its business continuity planning activities with a table top exercise.

One of the firm’s divisions located in the United Kingdom fell into the category that needed to complete a table top exercise. The exercise included participants from three critical departments which provide security monitoring services and support for their commercial clients. The local business continuity lead determined a building fire would be the appropriate scenario for the exercise.

The Dilemma

Once the exercise began, participants described their initial actions in responding to the building evacuation announcement. They pointed out that the company’s safety and evacuation procedures require that laptops be left behind at the employees’ work areas to facilitate and ensure everyone’s swift and safe evacuation from the building during a potential fire or other disruptive event.  The scenario was advanced to where the fire had been extinguished and the Fire Marshal declared the building unsafe to occupy. At this point, the participants in the exercise indicated management would instruct employees to go home. A major issue quickly became apparent. They would be unable to work remotely since their laptops remained in the building that they could no longer access. The short-term solution was to use their mobile phones to hand off work to other locations and manage work as best as possible with their mobile phones until their laptops were replaced.

During the discussions that followed, the question arose as to how quickly replacement laptops could be provisioned. Not soon enough, it turned out: the company did not have a local (UK) IT service center. Laptops are supplied from the company’s facility in Dublin, Ireland. This led to a list of other issues and questions that needed to be addressed, such as machine inventory, availability of pre-imaged machines, prioritization of need, expedited delivery and identification of alternate, local sources. The real magnitude of the impact on these departments’ ability to continue work was not fully considered until this exercise brought these issues to the forefront.

Exercises also challenge common assumptions and beliefs. In the building fire exercise scenario, virtually everyone’s initial reaction to not being able to work from their impacted location was that they would work remotely/from home without carefully considering the implications of that decision. In the building fire scenario described above, no one thought they’d be without a laptop until reminded that their laptops could not be retrieved. Raising such issues during the exercise, and thus one of the benefits of an exercise, is to force people to consider the situation more carefully and think through other alternative recovery options such as relocating to another facility (with available computers) or mitigations such as having a local laptop supplier.

Conclusion

Much can be learned from table top exercises as illustrated by this example. It is a valuable training and planning tool to improve responsiveness and organizational resiliency. However, such benefits can only be realized if exercises are done regularly and the lessons learned are applied. Similar to how regular physical exercise can benefit one’s personal well-being, table top and other business continuity exercises can also benefit an enterprise’s resiliency well-being. Therefore, exercise often.

About the Writer

Bob Farkas, PMP, AMBCI, SCRA
Manager, Project Management Office/Project Manager

Bob has been with Virtual Corporation since 2001, during which time he has led many Business Impact Analysis (BIA), Business Continuity Planning, and Risk Assessment projects across the health care, manufacturing, government, technology and other services industries. In addition, he has been instrumental in building and refining Virtual’s processes and toolkit, bringing new approaches and insights to client engagements. His career spans materials engineering, programming, telecom marketing research, IT outsourcing and business continuity. Bob holds PMP, AMBCI and SCRA certifications and has a Master’s in Chemical Engineering from the New Jersey Institute of Technology and a Bachelor’s in Metallurgical Engineering from McMaster University (Hamilton, Ontario).

Monday, 01 July 2019 15:32

Table Top Exercise Revelations

It’s important for business continuity professionals to be knowledgeable about the ways being unprepared for disruptions can harm their organization. How else are they going to make the case to senior management that building a good continuity program is worth the effort? 

In today’s post, we’ll look at the costs, direct and indirect, of being underprepared for a disaster event.

THE CASE OF CHERNOBYL

The new HBO miniseries Chernobyl is getting a lot of attention this month. The story of the fire that broke out in one of the reactors of the Chernobyl nuclear power plant hits close to home for anyone involved in business continuity and crisis management. The incident is an extreme example of multiple failures: in oversight, emergency planning, design, and crisis management.

The costs of the Chernobyl accident are widely thought to include dozens of deaths from intense radiation exposure, thousands of cases of lifespans being shortened by radiation, and 100,000 square kilometers of land being contaminated by fallout. The disaster also cost hundreds of billions of dollars to clean up and was blamed by Mikhail Gorbachev for having brought down the Soviet Union.

The costs will probably not be equally high if there is a disaster at your organization for which people are unprepared. But the impacts could still be substantial.

 ...

https://www.mha-it.com/2019/06/19/being-unprepared/

The development follows speculation and concern among security experts that the attack group would expand its scope to the power grid.
 

The attackers behind the epic Triton/Trisis attack that in 2017 targeted and shut down a physical safety instrumentation system at a petrochemical plant in Saudi Arabia now have been discovered probing the networks of dozens of US and Asia-Pacific electric utilities.

Industrial-control system (ICS) security firm Dragos, which calls the attack group XENOTIME, says the attackers actually began scanning electric utility networks in the US and Asia-Pacific regions in late 2018 using similar tools and methods the attackers have used in targeting oil and gas companies in the Middle East and North America.

The findings follow speculation and concern among security experts that the Triton group would expand its scope into the power grid. To date, the only publicly known successful attack was that of the Saudi Arabian plant in 2017. In that attack, the Triton/Trisis malware was discovered embedded in a Schneider Electric customer's safety system controller. The attack could have been catastrophic, but an apparent misstep by the attackers inadvertently shut down the Schneider Triconex Emergency Shut Down (ESD) system.

...

https://www.darkreading.com/perimeter/triton-attackers-seen-scanning-us-power-grid-networks/d/d-id/1334968

Security should be a high priority for every organization. Unfortunately, there is a serious shortage of quality cybersecurity staffers on the market.


Who’s overseeing your organization’s security? Are they equipped to secure your data and prevent ransomware attacks, or are they more likely to be scanning for viruses with a metal detector and patching systems with tape and paper?

When (ISC)2 asked cybersecurity professionals about gaps in their workforce, 63% said there’s a short supply of cybersecurity-focused IT employees at their companies. And 60% believe their organizations are at “moderate-to-extreme” risk of attacks because of this shortage.    

Mitch Kavalsky, Director, Security Governance and Risk at Sungard Availability Services (Sungard AS), believes you can solve this problem by focusing less on hiring cybersecurity personnel with expertise in specific technologies, and more on bringing in employees with well-rounded security-focused skillsets capable of adapting as needed.

But as Bob Petersen, CTO Architect at Sungard AS, points out, a company’s overall security should not be limited to the security team; it needs to be a key component of everyone’s job. “There needs to be more of a push to drive cybersecurity fundamentals into different IT roles. The role of the security team should be to set standards, educate and monitor. They can’t do it all themselves.”

Invest in your company’s security. But invest in it the right way – with the right people. If not, you’re bound to have more problems than solutions.

Cutting-Edge Baron Radars Reinforce Trust—If You Understand the Basic Mechanics of the Tool

 

By Dan Gallagher, Enterprise Product Manager, Baron

In late November of 2014, residents of the Buffalo, New York area busied themselves digging out of the heaviest winter snowfall event since the holiday season of 1945. Over five feet of snow blanketed some areas east of the city. This sort of extreme weather inflicts serious damage: thousands of motorists were stranded, hundreds of roofs and other structures collapsed, and, tragically, thirteen people lost their lives. The toll might have been even higher had diligent meteorologists not caught the telltale signs of lake effect snow days in advance by studying lake temperatures and wind trajectories at various heights. Officials warned of 3-5 inches per hour of precipitation more than a day before the event began. However, this extreme snowfall did not appear on the T.V. weather map the way such a major event usually does. The casual weather-watcher might have looked to radar imaging to make sense of the experience, but the weather instrument most familiar to the public was essentially blind to the phenomenon. This sort of apparent failure can make it difficult for people to trust weather technology, or create doubt in the minds of officials charged with making tough decisions for public or institutional safety—and each of those outcomes could endanger communities. While Doppler radar is a critical tool in evaluating weather conditions, it is important for institutions to understand the mechanisms it employs, its strengths and weaknesses, and the available methods for analyzing raw radar data. Why does it seem like radar got it wrong for Buffalo?

Doppler RADAR Fundamentals

RADAR, or RAdio Detection And Ranging, was a new technology when Buffalo saw its last huge snow event. Developed during World War Two, radar was first used for military applications. The tool could detect the position and movement of an object, like an enemy airplane. So, when radar operators on battleships picked up signs of rain, it made their jobs more difficult by cluttering radar data with unwanted information. Then, after the War, that clutter became the target.

The first generation of weather radars were cutting edge at the time, but rudimentary compared to today’s systems. Radar works by sending out radio waves, short bursts of energy that travel at nearly the speed of light and bounce off objects. When waves bounce back in the direction of the radar dish, their direction reveals where an object is relative to the system. These waves are sent out in bands. The first weather radar system, installed in Miami in 1959, could only send waves along horizontal bands, and operators had to manually adjust the elevation angle. This meant that the information radar could provide about an object was limited to a single plane. A ball and a cylinder would look the same on the radar screen because only one dimension of an object was accounted for.

The next generation of radar, embraced in the late 1980s and early ‘90s, offered more information than simply location with the introduction of Doppler technology. The major advance Doppler radar provided was the ability to measure an object’s velocity. These radars detect what is moving toward or away from them by analyzing the shift in the frequency of returning radio waves. (Still, radar cannot ‘see’ what is moving orthogonally to the beam.) This allows atmospheric scientists and meteorologists to identify additional characteristics in a storm. For example, they can identify the presence of rotating winds in the atmosphere, which can be a strong indication of a tornado. Generally represented in today’s systems as bright red juxtaposed with bright green, data on wind speed and location provides greater insight into such important weather events as tornadoes.
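To make that relationship concrete, the short Python sketch below computes a radial velocity from a Doppler frequency shift. It is purely illustrative; the transmit frequency and shift are hypothetical numbers, not values from any particular radar.

```python
# Minimal sketch of the Doppler relationship described above (illustrative only).
# A target moving toward the radar compresses the returned wave, shifting its
# frequency; radial velocity follows from v = (c * delta_f) / (2 * f0).

C = 3.0e8  # speed of light, m/s

def radial_velocity(f_transmit_hz: float, doppler_shift_hz: float) -> float:
    """Radial velocity (m/s) toward the radar implied by a given Doppler shift."""
    return (C * doppler_shift_hz) / (2.0 * f_transmit_hz)

if __name__ == "__main__":
    # Hypothetical S-band radar (~2.8 GHz) seeing a +500 Hz shift.
    v = radial_velocity(2.8e9, 500.0)
    print(f"Radial velocity: {v:.1f} m/s toward the radar")  # roughly 27 m/s
```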

Dual Polarization: The Modern Radar

The radars the National Weather Service (NWS) uses today form a network of 171 radars located throughout the United States. In 2007, Baron Services, along with its partner L3 Stratus, was selected to work with the NWS to modernize the entire network of radars to add Dual Polarization technology.

Early Doppler systems had not resolved a major limitation inherited from previous generations: the single plane of weather information. Embraced in the mid-2000s, dual-polarization technology changed that. By using both horizontal and vertical pulses, dual-pol radars offer another dimension of information on objects—a true cross-section of what is occurring in the atmosphere. With data on both the horizontal and vertical attributes of an object, forecasters can clearly identify rain, hail, snow, and other flying objects such as insects, not to mention smoke from wildfires, dust, and military chaff. Each precipitation type registers a distinct shape. For example, hail tumbles as it falls, so it appears to be almost exactly round to dual-pol radars. This additional information provides meteorologists and those responsible for monitoring the weather with very valuable information that allows them to make more informed decisions about the presence of hail, the amount of rainfall that may fall, and any change from liquid to frozen precipitation.
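As a rough illustration of that dual-polarization idea, the hedged sketch below compares horizontal and vertical returns to form a differential reflectivity value and applies made-up thresholds to guess at target shape. The thresholds are illustrative assumptions; operational hydrometeor classifiers are far more sophisticated.

```python
# Simplified illustration of the dual-polarization idea described above.
# Differential reflectivity (ZDR) compares horizontal and vertical returns:
# tumbling hail looks roughly spherical (ZDR near 0 dB), while large raindrops
# flatten as they fall (positive ZDR). Thresholds here are illustrative only.

import math

def differential_reflectivity_db(z_horizontal: float, z_vertical: float) -> float:
    """ZDR in dB from linear horizontal/vertical reflectivity factors."""
    return 10.0 * math.log10(z_horizontal / z_vertical)

def rough_hydrometeor_guess(zdr_db: float) -> str:
    if abs(zdr_db) < 0.5:
        return "near-spherical targets (e.g., tumbling hail or dry snow)"
    if zdr_db > 0.5:
        return "oblate targets (e.g., raindrops)"
    return "vertically oriented targets (e.g., aligned ice crystals)"

if __name__ == "__main__":
    print(rough_hydrometeor_guess(differential_reflectivity_db(800.0, 790.0)))  # near-spherical
    print(rough_hydrometeor_guess(differential_reflectivity_db(900.0, 600.0)))  # oblate
```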

Dual polarization technology is especially useful in observing winter weather. Because it can differentiate between types of precipitation, it can identify where the melting layer is with precision. This allows forecasters to better evaluate what type of precipitation to expect, as they can analyze information about the path that precipitation will face on the way to the ground. If a snowflake will pass through a warm layer, for example, it may become a raindrop, and that raindrop may turn into freezing rain when it hits a colder surface.

Radar Limitations – and How Baron Uses Technology to Fight Back

Still, radar technology has its undeniable limitations. Only so much of the vertical space is observed, since the atmosphere closest to the ground and directly above the radar are typically not scanned. This is best illustrated by the cone of silence phenomenon. Because radars do not transmit higher than a certain angle relative to the horizon, there is a cone-shaped blind spot above each radar.

Also, because radio waves are physical, clutter can make data hard to parse. Tall buildings, for example, give no useful information to meteorologists and can skew the data. However, Baron has introduced cutting-edge radar processing products that closely analyze the data returning to a radar to determine how the atmosphere has affected the path of a beam. Importantly, though, computers are not the only way, or always the best way, to account for deviations in the data. Baron’s human resources—their in-house weather experts—monitor radar outputs daily.

Nevertheless, radar cannot achieve every goal some members of the public expect it to. Drizzle, for example, does not show up very well at times, because of the extremely small size of particulates and the fact that most drizzle occurs below the height of the radar beam. So, people expecting dry conditions might be puzzled by slight sprinkles on their way to work. Another example, as mentioned earlier, is the difficulty of lake effect snow.

A lake effect snowfall occurs when cold air passes over the warmer water of a lake. This phenomenon is common in the Great Lakes region. Why does radar have difficulty observing it? The culpable limitation is the angle of the radar beam. In the same way that radio waves are not typically sent straight up from the ground, they are also not sent directly horizontal. Buildings, small topographical changes, and the like would foil the effectiveness of low-level radar sweeps in many cases anyway, but the further an area is from the source of the beam, the higher the blind spot. Imagine a triangle with a small angle at one corner—the further the lines travel, the further they part. Lake effect snow is a low-level phenomenon. So, when Buffalo shivered under feet of snow in 2014, the radar essentially missed the event because the lowest beams passed over the highest signs of snow.
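A back-of-the-envelope calculation shows how quickly even the lowest beam climbs above shallow, lake effect-style precipitation. The sketch below uses the standard 4/3-earth-radius approximation for beam height; the 0.5-degree tilt and the ranges chosen are illustrative assumptions, not the configuration of any specific radar site.

```python
# Rough sketch of why low-level phenomena such as lake effect snow can slip
# under the radar beam. Uses the common 4/3-earth-radius approximation for
# beam-center height; tilt angle and ranges below are illustrative.

import math

EARTH_RADIUS_KM = 6371.0
EFFECTIVE_RADIUS_KM = (4.0 / 3.0) * EARTH_RADIUS_KM  # accounts for refraction

def beam_height_km(range_km: float, elevation_deg: float, antenna_km: float = 0.03) -> float:
    """Approximate height of the beam center above the radar site."""
    theta = math.radians(elevation_deg)
    return (range_km * math.sin(theta)
            + range_km ** 2 / (2.0 * EFFECTIVE_RADIUS_KM)
            + antenna_km)

if __name__ == "__main__":
    for r in (25, 50, 100, 150):
        h = beam_height_km(r, elevation_deg=0.5)
        print(f"{r:>4} km from the radar: beam center ~{h:.2f} km above the site")
```

At a hundred kilometers or more, the beam center is already well over a kilometer up, which is above much of a shallow lake effect snow band.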

Reading the Radar Display: What to Remember

Understanding the history of radar technology and knowing the source of the radar display can prepare anyone to better utilize radar information. Conscientious administrators and officials can recall these radar facts as part of their decision-making process when referencing radar data:

1. Radar cannot see everything.

As apparent in the Buffalo snowfall example, the physical limitations of radar mean that a radar does not ‘see’ what occurs close to the ground. Additionally, since what radar ‘sees’ in the atmosphere is impacted by distance as the beam of energy continues to radiate away from the source, the precision of data can depend on the location of weather relative to the location of a radar. Some radar displays are compilations of data from multiple radars, and these multiple radar displays are often pieced together to account for gaps in radar coverage.

2. No one image tells the whole story.

Weather evolves in stages, so it is important to watch storm motion, growth and decay. When watching weather on radar, how it trends from one scan to the next should never be ignored. Evaluating trends allows watchers to better predict turbulent weather. For the best results, avoid focusing on only one storm. People tend to watch a particular storm and not the entire picture, or what is downstream of the “worst” storm. This can distract from impactful conditions and trends.

3. Raw data can be misleading.

Even when radar ‘sees’ weather accurately, it can at times mislead watchers. Virga, for example, is precipitation that evaporates before it reaches the ground due to drier air closer to the surface.  Just because the radar shows precipitation does not always mean precipitation will reach the ground. In some cases, there is no substitute for using other information to determine what is actually happening. That is why NEXRAD/Baron delivers more than just a radar image to give users a better understanding of what the weather is doing. It gives information on Severe Winds, Threats, Velocity, Hail Tracks, and other conditions.

Baron Radar Equipment Gives Decision-Makers Actionable Information

Baron Threat Net is used by stadiums, emergency management agencies, schools, racetracks, and other institutions because its radar technology makes it easier for users to identify the risk: it removes extraneous information and processes the inevitable problems in the data to create a simplified picture of the weather. Doing that, however, takes computing power, scientific expertise, and other tools. Radar should not be relied upon by itself to identify every weather phenomenon. It is important that decision-makers understand the limitations of the technologies they use, not only to make the best decisions for their communities, but to reinforce community trust.

First forecast: ‘Don’t let a weak El Niño fool you’

 

By Brian Wooley & Paul Licata of Interstate Restoration

The first hurricane prediction for 2019 was less alarming than many prior years, with only two major hurricanes forecast to hit the U.S. coast. Hurricane researchers at Colorado State University announced in April they foresee a slightly below-average Atlantic hurricane season, citing a weak El Niño and a slightly cooler tropical Atlantic ocean as major contributors. Their second, often more accurate forecast is due June 4, and NOAA announces its first forecast of the hurricane season on May 23.

But don’t be fooled. Early predictions in 2017 also pointed to a slightly below-average Atlantic hurricane season, but in that year hurricanes Harvey, Irma and Maria slammed into the Atlantic and Gulf coasts as well as Puerto Rico and became three of the five costliest hurricanes in U.S. history.

Each storm is different and unpredictable, which means business and property owners shouldn’t become complacent; it’s extremely important to prepare in advance for major hurricanes. According to the Federal Emergency Management Agency (FEMA), 40 percent of businesses do not reopen after a disaster and another 25 percent fail within one year. Preplanning is important because it will streamline operations during and after a storm, lead to a quicker recovery and potentially lower insurance claims costs.

According to an April 10 webinar hosted by Dr. Phil Klotzbach of the Department of Atmospheric Science at CSU, researchers are predicting 13 named storms during the 2019 Atlantic hurricane season, with two becoming major hurricanes. (In comparison, in 2018, CSU predicted 14 named storms with three reaching major hurricane strength.) Historical data, combined with atmospheric research, gives the U.S. East Coast and Florida Peninsula about a 28 percent chance of being hit by a major hurricane (the average for the last century is 31 percent). The Gulf Coast, from the Florida panhandle westward to Brownsville, Texas, is forecast to have a 28 percent chance (the average for the last century is 30 percent). The Caribbean has a 39 percent chance, down from the 42 percent average for the last century.

The states with the highest probability to receive sustained hurricane-force winds include Florida (47 percent), Texas (30 percent), Louisiana (28 percent), and North Carolina (26 percent), according to Klotzbach. But hurricanes can cut a wide swath, he says, as Hurricane Michael did in October 2018 as it moved into Georgia, causing high wind damage and gusts as high as 115 mph in the southwest part of the state.

Klotzbach outlined how total dollar losses from Atlantic hurricanes are increasing each year, driven primarily by a doubling of the U.S. population since the 1950s and by larger homes being built, now averaging more than 2,600 square feet. It’s shocking to consider that if a category 4 storm struck Miami today, similar to the one that leveled the city in 1926, it’s estimated it would cost $200 billion to rebuild. That exceeds the $160 billion in damage caused by Hurricane Katrina in 2005.

Experienced recovery experts, like those at Interstate Restoration, are skilled at delivering a quick response and delegating teams to react as soon as a storm is named. Once they identify approximately where the storm will land on U.S. soil and assess its intensity, they allocate assets, resources, and equipment as needed and keep in close contact with all clients in the path of the storm.  Staging efforts to a safe area begin many days before the event.

Powerful hurricanes, such as Irma, can disrupt businesses for weeks or months, which is why pre-planning is so important. It starts with hiring a disaster response company in advance. By establishing a long term partnership before a disaster happens, business and property owners can ensure they are on the priority list for getting repairs done quickly. The restoration partner can also assist with performing a pre-loss property assessment, recovery planning and working closely with insurance.

Quick recovery is made more difficult when business and property owners neglect proper preparations. So despite predictions calling for fewer storms in 2019, it’s always better to be prepared, as Mother Nature can be destructive.

 ♦

Brian Wooley is vice president of operations and Paul Licata is national account manager at Interstate Restoration, a national disaster-response company based in Ft. Worth, Texas.

The first anniversary of GDPR is rapidly approaching on May 25. Tech companies used the past year to learn how to navigate the guidelines set in place by the law while ensuring compliance with similar laws globally. After all, companies that violate GDPR face fines of up to the greater of four percent of worldwide annual revenue or roughly $22.4 million (€20 million).
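For readers doing the math, the small sketch below works out that penalty ceiling: the upper tier is the greater of four percent of worldwide annual turnover or €20 million. The revenue figure used is hypothetical.

```python
# Quick arithmetic behind the GDPR upper-tier penalty ceiling mentioned above:
# the greater of 4% of worldwide annual turnover or EUR 20 million.

EUR_FLOOR = 20_000_000  # EUR 20 million upper-tier floor

def max_gdpr_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine for a given worldwide turnover."""
    return max(0.04 * worldwide_annual_turnover_eur, EUR_FLOOR)

if __name__ == "__main__":
    # A hypothetical company with EUR 2 billion in worldwide turnover.
    print(f"Maximum exposure: EUR {max_gdpr_fine_eur(2_000_000_000):,.0f}")  # EUR 80,000,000
```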

Although the GDPR primarily applies to countries in the European Union, the law’s reach has extended beyond the continent, affecting tech companies stateside. As long as a US-based company has a web presence in the EU, that company must also follow GDPR guidelines. In an increasingly globalized world, that leaves few companies outside the mix.

GDPR acts as a model for tech companies looking to focus on consumer security, data protection, and compliance. A year into its existence, there is still work to be done in understanding and applying the GDPR’s requirements. For GDPR’s anniversary, we’ve gathered a few IT experts to shed some light on the GDPR, its global effects, and how to ensure data protection.

Alan Conboy, Office of the CTO, Scale Computing:

“With the one-year anniversary of GDPR approaching, the regulation has made an impact in data protection around the world this century. One year later with the high standards from GDPR, organizations are still actively working to manage and maintain data compliance, ensuring it’s made private and protected to comply with the regulation. With the fast pace of technology innovation, one way IT professionals have been meeting compliance is by designing solutions with data security in mind. Employing IT infrastructure that is stable and secure, with data simplicity and ease-of-use is vital for maintaining GDPR compliance now and in the future,” said Alan Conboy, Office of the CTO, Scale Computing.

Samantha Humphries, senior product marketing manager, Exabeam:

“As the GDPR celebrates its first birthday, there are some parallels to be drawn between the regulation and that of a human reaching a similar milestone. It’s cut some teeth: to the tune of over €55 million – mainly at the expense of Google, who received the largest fine to date. It is still finding its feet: the European Data Protection Board are regularly posting, and requesting public feedback on, new guidance. It’s created a lot of noise: for EU data subjects, our web experience has arguably taken a turn for the worse with some sites blocking all access to EU IP addresses and many more opting to bombard us with multiple questions before we can get anywhere near their content (although at least the barrage of emails requesting us to re-subscribe has died down). And it has definitely kept its parents busy: in the first nine months, over 200,000 cases were logged with supervisory authorities, of which ~65,000 were related to data breaches.

With the GDPR still very much in its infancy, many organisations are still getting to grips with exactly how to meet its requirements. The fundamentals remain true: know what personal data you have, know why you have it, limit access to a need-to-know basis, keep it safe, only keep it as long as you need it, and be transparent about what you’re going to do with it. The devil is in the detail, so keeping a close watch on developments from the EDPB will help provide clarity as the regulation continues to mature,” said Samantha Humphries, senior product marketing manager, Exabeam.

Rod Harrison, CTO, Nexsan, a StorCentric Company:

“Over the past 12 months, GDPR has provided the perfect opportunity for organisations to reassess whether their IT infrastructure can safeguard critical data, or if it needs to be upgraded to meet the new regulations. Coupled with the increasing threat of cyber attacks, one of the main challenges businesses have to contend with is the right to be forgotten – and this is where most have been falling short.

Any EU customers can request that companies delete all of the data that is held about them, permanently. The difficulty here lies in being able to comprehensively trace all of it, and this has given the storage industry an opportunity to expand its scope of influence within an IT infrastructure. Archive storage can not only support secure data storage in accordance with GDPR, but also enable businesses to accurately identify all of the data about a customer, allowing it to be quickly removed from all records. And when, not if, your business suffers a data breach, you can rest assured that customers who have asked you to delete data won’t suddenly discover that it has been compromised,” said Rod Harrison, CTO, Nexsan, a StorCentric Company.

Alex Fielding, iCEO and Founder, Ripcord:

“If your company handles any data of European Union residents, you’re subjected to the regulations, expectations and potential consequences of GDPR. Critical elements of the regulation like right to access, right to be forgotten, data portability and privacy by design all require a company’s data management to be nimble, accessible and—most importantly—digital.

Notably, GDPR grants EU residents rights to access, which means companies must have a documented understanding of whose data is being collected and processed, where that data is being housed and for what purpose it’s being obtained. The company must also be able to provide a digital report of that data management to any EU resident who requests it within a reasonable amount of time. This is a tall order for a company as is, but compliance becomes almost unimaginable if a company’s current and archival data is not available digitally.

My advice to anyone struggling to achieve and maintain GDPR compliance is to develop and implement a full compliance program, beginning with digitizing and cataloguing your customer data. When you unlock the data stored within your paper records, you set your company up for compliance success,” said Alex Fielding, iCEO and founder of Ripcord.

Wendy Foote, Senior Contracts Manager, WhiteHat Security:

“Last year, the California Consumer Privacy Act (CCPA) was signed into law, which aims to provide consumers with specific rights over their personal data held by companies. These rights are very similar to those given to EU-based individuals by GDPR one year ago. The CCPA, set for Jan. 1, 2020, is the first of its kind in the U.S., and while good for consumers, affected companies will have to make a significant effort to implement the cybersecurity requirements. Plus, it will add yet another variance in the patchwork of divergent US data protection laws that companies already struggle to reconcile.

If GDPR can be implemented to protect all of the EU, could the CCPA be indicative of the potential for a cohesive US federal privacy law? This idea has strong bipartisan congressional support, and several large companies have come out in favor of it. There are draft bills in circulation, and with a new class of representatives recently sworn into Congress and the CCPA effectively putting a deadline on the debate, there may finally be a national resolution to the US consumer data privacy problem. However, the likelihood of it passing in 2019 is slim.

A single privacy framework must include flexibility and scalability to accommodate differences in size, complexity, and data needs of companies that will be subject to the law. It will take several months of negotiation to agree on the approach. But we are excited to see what the future brings for data privacy in our country and have GDPR to look to as a strong example,” said Wendy Foote, Senior Contracts Manager, WhiteHat Security.

Scott Parker, Director, product marketing, Sinequa:

“Even before the EU’s GDPR regulation took effect in 2018, organizations had been investing heavily in related initiatives. Since last year, the law has effectively standardized the way many organizations report on data privacy breaches. However, one area where the regulation has proven less effective is allowing regulators to levy fines against companies that have mishandled customer data.

From this perspective, organizations perceiving the regulation as an opportunity versus a cost burden have experienced the greatest gains. For those that continue to struggle with GDPR compliance, we recommend looking at technologies that offer an automated approach for processing and sorting large volumes of content and data intelligently. This alleviates the cognitive burden on knowledge workers, allowing them to focus on more productive work, and ensures that the information they are using is contextual and directly aligned with their goals and the tasks at hand,” said Scott Parker, Director, product marketing, Sinequa.

Caroline Seymour, VP, product marketing, Zerto:

“Last May, the European Union implemented GDPR, but its implications reach far beyond the borders of the EU. Companies in the US that interact with data from the EU must also meet its compliance measures, or risk global repercussions.

Despite the gravity of these regulations and their mutually agreed upon need, many companies may remain in a compliance ‘no man’s land’– not fully confident in their compliance status. And as the number of consequential data breaches continue to climb globally, it is increasingly critical that companies meet GDPR requirements. My advice to those impacted companies still operating in a gray area is to ensure that their businesses are IT resilient by building an overall compliance program.

By developing and implementing a full compliance program with IT resilience at its core, companies can leverage backup via continuous data protection, making their data easily searchable over time and ultimately, preventing lasting damage from any data breach that may occur.

With a stable, unified and flexible IT infrastructure in place, companies can protect against modern threats, ensure regulation standards are met, and help provide peace of mind to both organizational leadership and customers,” said Caroline Seymour, VP, product marketing, Zerto.

Matt VanderZwaag, Director, product development, US Signal:

“With the one-year anniversary of GDPR compliance upcoming, meeting compliance standards can still be a somewhat daunting task for many organizations. A year later, data protection is a topic that all organizations should be constantly discussing and putting into practice to ensure that GDPR compliance remains a top priority.

Moving to an infrastructure provided by a managed service provider with expertise is one solution, not only for maintaining GDPR compliance, but also implementing future data protection compliance standards that are likely to emerge. Service providers can ensure organizations are remaining compliant, in addition to offering advice and education to ensure your business has the skills to manage and maintain future regulations,” said Matt VanderZwaag, Director, product development, US Signal.

Lex Boost, CEO, Leaseweb USA:

“GDPR has played an important role in shifting attitude toward data privacy all around the world, not just in the EU. Companies doing business in GDPR-regulated areas have had to seriously re-evaluate their data center strategies throughout the past year. In addition, countries outside of the GDPR regulated areas are seriously considering better legislation for protecting data.

From a hosting perspective, managing cloud infrastructures, particularly hybrid ones, can be challenging, especially when striving to meet compliance regulations. It is important to find a team of professionals who can guide how you manage your data and still stay within the law. Establishing the best solution does not have to be a task left solely to the IT team. Hosting providers can help provide knowledge and guidance to help you manage your data in a world shaped by increasingly stringent data protection legislation,” said Lex Boost, CEO, Leaseweb USA.

Neil Barton, CTO, WhereScape:

“Despite the warnings of high potential GDPR fines for companies in violation of the law, it was never clear how serious the repercussions would be. Since the GDPR’s implementation, authorities have made an example of Internet giants. These high-profile fines are meant to serve as a warning to all of us.

Whether your organization is currently impacted by the GDPR or not, now’s the time to prepare for future legislation that will undoubtedly spread worldwide given data privacy concerns. It’s a huge task to get your data house in order, but automation can lessen the burden. Data infrastructure automation software can help companies be ready for compliance by ensuring all data is easily identifiable, explainable and ready for extraction if needed. Using automation to easily discover data areas of concern, tag them and track data lineage throughout your environment provides organizations with greater visibility and a faster ability to act. In the event of an audit or a request to remove an individual’s data, automation software can provide the ready capabilities needed,” said Neil Barton, CTO, WhereScape.

By Brian Zawada
Director of Consulting Services, Avalution Consulting

Adaptive BC has done a great job of stirring up the business continuity profession with some new ideas. At Avalution, we love pushing the envelope and trying new things, so we were excited to learn more about the ideas in the Adaptive BC manifesto, as well as the accompanying book and training.

While Adaptive BC identified some real problems with the business continuity approaches taken by some organizations, their solutions aren’t for everyone (and not all organizations experience these problems). In fact, their focus is so narrow, we think it’s of little practical use for most organizations.

Business Continuity as Defined by Adaptive BC

From AdaptiveBCP.org: “Adaptive Business Continuity is an approach for continuously improving an organization’s recovery capabilities, with a focus on the continued delivery of services following an unexpected unavailability of people, locations, and/or resources” (emphasis on "recovery” added by Avalution).

As is clear from their definition and made explicit in the accompanying book (Adaptive Business Continuity: A New Approach - 2015), Adaptive BC is exclusively focused on improving recovery when faced with unavailability of people, locations, and other resources.

This approach – or focus – leaves out a long list of responsibilities that add considerable value to most business continuity management programs, such as (the quotes below are taken from Adaptive BC’s book):

...

https://www.linkedin.com/pulse/adaptive-bc-most-brian-zawada/

Tuesday, 21 May 2019 19:33

Adaptive BC: Not for Most

I have been the Head of Thought Leadership at the Business Continuity Institute since September 2018. In the eight months since my start date, I have been quizzed about many topics by our members – Brexit preparedness, supply chain resilience, horizon scanning and cyber risk to name but a few. However, the topic which I get pressed to answer more than any other is “What is your view on Adaptive Business Continuity?”.

My research on the subject immediately took me to the Adaptive BC website and led me to purchase Adaptive BC – A New Approach in order to learn about the subject. Since then, interest in the so-called Adaptive BC “revolution” has gained significant traction, with numerous articles from both the founders of Adaptive BC and those who are more sceptical about the subject. Articles such as David Lindstedt’s 2018: The BC Ship Continues to Sink and Mark Armour’s Adaptive Business Continuity: Clearly Different, Arguably Better are being met with writing such as Alberto Mattia’s Adaptive BC Reinvents the Wheel and the very recent article from Jean Rowe challenging Adaptive BC’s approach (Adaptive BC: the business continuity industry’s version of The Emperor’s New Clothes?).

The so-called “revolution” has certainly stirred the BC community – but are the ructions justified?

...

https://www.thebci.org/news/adaptive-bc-a-revolution-or-a-useful-set-of-tools-and-approaches-in-the-right-circumstances.html

Mobile apps have become the touchpoint of choice for millions of people to manage their finances, and Forrester regularly reviews those of leading banks. We just published our latest evaluations of the apps of the big five Canadian banks: BMO, CIBC, RBC, Scotiabank, and TD Canada Trust.

Overall, they’ve raised the bar, striking a good balance between delivering robust, high-value functionality and ensuring that it’s easy for customers to get that value with a strong user experience. The top two banks in our review, CIBC and RBC, both made significant improvements to their app user experience (UX) over the past year by focusing on streamlining navigation and workflows. But our analysis also revealed ways all banks can — and should — improve, such as:

Banks should give customers a better view of their financial health. Banks we reviewed don’t provide external account aggregation, and they put the burden on the user to stay on top of their monthly inflows and outflows. They don’t offer useful features such as an account history view that displays projected balances after scheduled transactions hit the account — something leading banks in other regions of the world (like Europe and the US) do offer.

...

https://go.forrester.com/blogs/the-good-the-bad-the-ugly-of-canadian-mobile-banking-experiences-in-2019/

Learn about some of the latest findings on the devastation from a hurricane, and how to prepare your business to withstand this natural catastrophe. Read this infographic by Agility Recovery.


Thursday, 16 May 2019 16:06

The Biggest Hurricane Risk?

Adaptive Business Continuity (Adaptive BC) is an alternative approach to business continuity planning, ‘based on the belief that the practices of traditional BC planning have become increasingly ineffectual’. In this article, Jean Rowe challenges the Adaptive BC approach.

We all can appreciate the intent to innovate, but innovation, in the end, must meet the needs of the consumer. With this in mind, the Adaptive BC approach (The Adaptive BC Manifesto, 2016) uses ‘innovation’ as a key message.

However, I believe that, upon reflection, the Adaptive BC approach can be viewed as the business continuity industry’s version of The Emperor’s New Clothes.

The Emperor’s New Clothes is “a short tale by Hans Christian Andersen, about two weavers who promise an emperor a new suit of clothes that they say is invisible to those who are unfit for their positions, stupid, or incompetent.” As professional practitioners, we need to dispel the myth that using the Adaptive BC approach is, metaphorically speaking, draping the Emperor (i.e. top management) in finely stitched ‘innovative’ business continuity designer clothes whose beauty only the competent can see.

...

https://www.continuitycentral.com/index.php/news/business-continuity-news/3993-adaptive-bc-the-business-continuity-industry-s-version-of-the-emperor-s-new-clothes

Strategic Overview

Disasters disrupt preexisting networks of demand and supply. Quickly reestablishing flows of water, food, pharmaceuticals, medical goods, fuel, and other crucial commodities is almost always in the immediate interest of survivors and longer-term recovery.

When there has been catastrophic damage to critical infrastructure, such as the electrical grid and telecommunications systems, there will be an urgent need to resume—and possibly redirect— preexisting flows of life-preserving resources. In the case of densely populated places, when survivors number in the hundreds of thousands, only preexisting sources of supply have enough volume and potential flow to fulfill demand.

During the disasters in Japan (2011) and Hurricane Maria in Puerto Rico (2017), sources of supply remained sufficient to fulfill survivor needs. But the loss of critical infrastructure, the surge in demand, and limited distribution capabilities (e.g., trucks, truckers, loading locations, and more) seriously complicated existing distribution capacity. If emergency managers can develop an understanding of fundamental network behaviors, they can help avoid unintentionally suppressing supply chain resilience, with the ultimate goal of ensuring emergency managers “do no harm” to surviving capacity.

Delayed and uneven delivery can prompt consumer uncertainty that increases demand and further challenges delivery capabilities. On the worst days, involving large populations of survivors, emergency management can actively facilitate the maximum possible flow of preexisting sources of supply: public water systems; commercial water/beverage bottlers; food, pharmaceutical, and medical goods distributors; fuel providers; and others. To do this effectively requires a level of network understanding and a set of relationships that must be cultivated prior to the extreme event. Ideally, key private and public stakeholders will conceive, test, and refine strategic concepts and operational preparedness through recurring workshops and tabletop exercises. When possible, mitigation measures will be pre-loaded. In this way, private-public and private-private relationships are reinforced through practical problem solving.

Contemporary supply chains share important functional characteristics, but risk and resilience are generally anchored in local-to-regional conditions. What best advances supply chain resilience in Miami will probably share strategic similarities with Seattle, but will be highly differentiated in terms of operations and who is involved.

In recent years the Department of Homeland Security (DHS) and the Federal Emergency Management Agency (FEMA) have engaged with state, local, tribal and territorial partners, private sector, civic sector, and the academic community in a series of innovative interactions to enhance supply chain resilience. This guide reflects the issues explored and the lessons (still being) learned from this process. The guide is designed to help emergency managers at every level think through the challenge and opportunity presented by supply chain resilience. Specific suggestions are made related to research, outreach, and action.

...

https://www.fema.gov/media-library-data/1555328671083-d9422177bd55d9c6fafc327a6b239290/SupplyChainResilienceGuide-April2019.pdf

Tuesday, 30 April 2019 14:48

FEMA Supply Chain Resilience Guide

Archived data great for training and planning

By GLEN DENNY, Baron Services, Inc.

Historical weather conditions can be used for a variety of purposes, including simulation exercises for staff training; proactive emergency weather planning; and proving (or disproving) hazardous conditions for insurance claims. Baron Historical Weather Data, an optional collection of archived weather data for Baron Threat Net, lets users extract and view weather data from up to 8 years of archived radar, hail and tornado detection, and flooding data. Depending upon the user’s needs, the weather data can be configured with access to a window of either 30 days or 365 days of historical access. Other available options for historical data have disadvantages, including difficulty in collecting the data, inability to display data or point query a static image, and issues with using the data to make a meteorological analysis.

Using data for simulation exercises for staff training

Historical weather data is a great tool to use for conducting realistic severe weather simulations during drills and training exercises. For example, using historical lightning information may assist in training school personnel on what conditions look like when it is time to enact their lightning safety plan.

Reenactments of severe weather and lightning events are beneficial for school staff to understand how and when actions should have been taken and what to do the next time a similar weather event happens. It takes time to move people to safety at sporting events and stadiums. Examining historical events helps decision makers formulate better plans for safer execution in live weather events.

Post-event analysis for training and better decision making is key to keeping people safe. A stadium filled with fans for a major sporting event with severe weather and lightning can be extremely deadly. Running a post-event exercise with school staff can be extremely beneficial to building plans that keep everyone safe for future events.

Historical data key to proactive emergency planning

School personnel can use historical data as part of advance proactive planning that would allow personnel to take precautionary measures. For example, if an event in the past year caused an issue, like flooding of an athletic field or facility, officials can look back to that day in the archive at the Baron Threat Net total accumulation product, and then compare that forecast precipitation accumulation from the Baron weather model to see if the upcoming weather is of comparable scale to the event that caused the issue. Similarly, users could look at historical road condition data and compare it to the road conditions forecast.

The data can also be used for making the difficult call to cancel school. The forecast road weather lets officials look at problem areas 24 hours before the weather happens. The historical road weather helps school and transportation officials examine problem areas after the event and make contingency plans based on forecast and actual conditions.

Insurance claims process improved with use of historical data

Should a weather-related accident occur, viewing the historical conditions can be useful in supporting accurate claim validation for insurance and funding purposes. In addition, if an insurance claim needs to be made for damage to school property, school personnel can use the lightning, hail path, damaging wind path, or critical weather indicators to see precisely where and when the damage was likely to have occurred.

Similarly, if a claim is made against a school system due to a person falling on an icy sidewalk on school property, temperature from the Baron current conditions product and road condition data may be of assistance in verifying the claim.

Underneath the hood

The optional Baron Historical Weather Data addition to the standard Baron Threat Net subscription includes a wide variety of data products, including high-resolution radar, standard radar, infrared satellite, damaging wind, road conditions, and hail path, as well as 24-hour rainfall accumulation, current weather, and current threats.

Offering up to 8 years of data, users can select a specific product and review up to 72 hours of data at one time, or review a specific time for a specific date. Information is available for any given area in the U.S., and historical products can be layered, for example, hail swath and radar data. Packages are available in 7-day, 30-day, or 1-year increments.

Other available options for historical weather data are lacking

There are several ways school and campus safety officials can gain access to historical data, but many have disadvantages, including difficulty in collecting the data, inability to display the data, and the inability to point query a static image. Also, officials may not have the knowledge needed to use the data for making a meteorological analysis. In some cases, including road conditions, there is no available archived data source.

For instance, radar data may be obtained from the National Centers for Environmental Information (NCEI), but the process is not straightforward, making it time consuming. Users may have radar data, but lack the knowledge base to be able to interpret it. By contrast, with Baron Threat Net Historical Data, radar imagery can be displayed, with critical weather indicators overlaid, taking the guesswork out of the equation.

There is no straightforward path to obtaining historical weather conditions for specific school districts. The local office of the National Weather Service may be of some help but their sources are limited. By contrast, Baron historical data brings together many sources of weather and lightning data for post-event analysis and validation. Baron Threat Net is the only online tool in the public safety space with a collection of live observations, forecast tools, and historical data access.

By TREVOR BIDLE, information security and compliance officer, US Signal

World Backup Day purposely falls the day before April Fool’s Day. The founders of the initiative, which takes place March 31, want to impress upon the public that the loss of data resulting from a failure to back up is no joke.

It’s surprising to find that nearly 30 percent of us have never backed up our data. Even more shocking are studies stating that only four in ten companies have a fully documented disaster recovery (DR) plan in place. Of those companies that have a plan, only 40 percent test it at least once a year.

Data has become an integral component of our personal and professional lives, from mission-critical business information to personal photos and videos. DR plans don’t have to be overly complicated. They just need to exist and be regularly tested to ensure they work as planned.

Ahead of World Backup Day, here are some of the key components to consider in a DR plan.

The Basics of Backup

A backup creates data copies at regular intervals that are saved to a hard drive, tape, disk or virtual tape library and stored offsite. If you lose your original data, you can retrieve copies of it. This is particularly useful if your data became corrupted at some point. You simply “roll back” to a copy of the data before it was corrupted.
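As a minimal sketch of that “roll back” idea (not any vendor’s implementation), the snippet below keeps timestamped copies of a file and restores the newest copy taken before a chosen point in time. The paths and naming scheme are assumptions for illustration.

```python
# Minimal sketch of the "roll back" idea described above: keep timestamped
# copies of a file and restore the newest copy taken before the data went bad.
# The backup location and naming convention are illustrative assumptions.

import shutil
from datetime import datetime
from pathlib import Path

BACKUP_DIR = Path("backups")  # hypothetical secondary/offsite location

def take_backup(source: Path) -> Path:
    """Copy the source file into the backup directory with a timestamp suffix."""
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%dT%H%M%S")
    target = BACKUP_DIR / f"{source.name}.{stamp}"
    shutil.copy2(source, target)
    return target

def roll_back(source: Path, before: datetime) -> None:
    """Restore the newest backup taken before `before` (e.g., before corruption)."""
    stamped = []
    for p in BACKUP_DIR.glob(f"{source.name}.*"):
        taken = datetime.strptime(p.suffix.lstrip("."), "%Y%m%dT%H%M%S")
        if taken < before:
            stamped.append((taken, p))
    if not stamped:
        raise FileNotFoundError("No backup predates the requested point in time")
    shutil.copy2(max(stamped)[1], source)  # newest copy before the cutoff
```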

Other than storage media costs, backup is relatively inexpensive. It may take time for your IT staff to retrieve and recover the data, however, so backup is usually reserved for data you can do without for 24 hours or more.  It doesn’t do much for ensuring continued operations.

Application performance can also be affected each time a backup is done. However, backup is a cost-effective means of meeting certain compliance requirements and for granular recovery, such as recovering a single user’s emails from three years ago. It serves as a “safety net” for your data and has a distinct place in your DR plan.

You can opt for a third-party vendor to handle your backups. For maximum efficiency and security, companies that offer cloud-based backups may be preferable. Some allow you to back up data from any physical or virtual infrastructure, or Windows workstation, to their cloud service. You can then access your data any time, from anywhere. Some also offer backups as a managed service, handling everything from remediation of backup failures to system/file restores to source.

Stay Up-To-Date with Data Replication

Like backup, data replication copies and moves data to another location. The difference is that replication copies data in real- or near-real time, so you have a more up-to-date copy.

Replication is usually performed outside your operating system, in the cloud. Because a copy of all your mission-critical data is there, you can “failover” and migrate production seamlessly. There’s no need to wait for backup tapes to be pulled.

Replication costs more than backup, so it’s often reserved for mission-critical applications that must be up and running for operations to continue during any business interruption. That makes it a key component of a DR plan.

Keep in mind that replication copies every change, even if the change resulted from an error or a virus. To access data as it existed before a change, the replication process must be combined with continuous data protection or another type of technology to create recovery points to roll back to if required. That’s one of the benefits of a Disaster Recovery as a Service (DRaaS) solution.
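The sketch below illustrates that point in miniature, assuming a simple in-memory change journal: replication mirrors every write, so recovery points are what let you rebuild state as it existed before a bad change. It is not a model of any specific continuous data protection product.

```python
# Hedged sketch of the point made above: replication alone mirrors every change,
# including bad ones, so continuous data protection keeps a journal of changes
# that can be replayed up to (but not past) a chosen recovery point.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Any

@dataclass
class JournalEntry:
    timestamp: datetime
    key: str
    value: Any  # the new value written at this moment

@dataclass
class ChangeJournal:
    entries: list[JournalEntry] = field(default_factory=list)

    def record(self, key: str, value: Any) -> None:
        self.entries.append(JournalEntry(datetime.now(), key, value))

    def state_at(self, recovery_point: datetime) -> dict[str, Any]:
        """Rebuild the replica as it looked at the chosen recovery point."""
        state: dict[str, Any] = {}
        for entry in self.entries:
            if entry.timestamp > recovery_point:
                break  # ignore changes (e.g., corruption) after the recovery point
            state[entry.key] = entry.value
        return state

if __name__ == "__main__":
    journal = ChangeJournal()
    journal.record("invoice-1001", "draft")
    checkpoint = datetime.now()           # recovery point before the bad write
    journal.record("invoice-1001", "CORRUPTED")
    print(journal.state_at(checkpoint))   # {'invoice-1001': 'draft'}
```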

Planning for Disasters

DRaaS solutions offer benefits that make them an attractive option for integrating into a DR plan. By employing true continuous data protection, a DRaaS solution can offer a recovery point objective (RPO) of a few seconds. Applications can be recovered instantly and automatically — in some cases with a service level agreement (SLA)-based recovery time objective (RTO) of minutes.
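To keep the two metrics straight, here is a small, hypothetical check of achieved RPO (how much data you could lose) and RTO (how long recovery takes) against SLA targets; the target values are assumptions, not figures from any provider.

```python
# Illustrative check of the two metrics named above. Targets are hypothetical.

from datetime import datetime, timedelta

RPO_TARGET = timedelta(seconds=30)   # assumed SLA: lose at most 30 s of data
RTO_TARGET = timedelta(minutes=15)   # assumed SLA: back online within 15 min

def within_sla(last_recovery_point: datetime, outage_start: datetime,
               service_restored: datetime) -> bool:
    """True if both the achieved RPO and the achieved RTO meet the targets."""
    achieved_rpo = outage_start - last_recovery_point   # data-loss window
    achieved_rto = service_restored - outage_start      # downtime window
    return achieved_rpo <= RPO_TARGET and achieved_rto <= RTO_TARGET
```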

DRaaS solutions also use scalable infrastructure, allowing virtual access of assets with little or no hardware and software expenditures. This saves on software licenses and hardware. Because DRaaS solutions are managed by third parties, your internal IT resources are freed up for other initiatives. DRaaS platforms vary, so research your options to find the one that best meets your needs.

A DR plan is basically a data protection strategy, one that contains numerous components to help ensure the data your business needs is there when it is needed — even if a manmade or natural disaster strikes.

Trevor Bidle has been information security and compliance officer for US Signal, a leading end-to-end solutions provider, since October 2015. Previously, Bidle was vice president of engineering at US Signal. Bidle is a certified information systems auditor and is completing his master’s in Cybersecurity Policy and Compliance at The George Washington University.

ERAU students generate forecasts with eye-catching daily graphics

Embry-Riddle Aeronautical University (ERAU) decided to amp up its broadcast meteorology classes with professional weather graphics and precision storm tracking tools that can be used to illustrate complex weather conditions and explain weather concepts to students. The customizable graphics platform enables the university to incorporate a range of other available weather data and create graphics that work well in the classroom environment. Providing weather graphics every day, including holidays, helps the university tell the most important national and regional weather story of the day. By expanding the tools student forecasters have on hand, the weather platform provides exceptional analysis and learning opportunities.

First used for broadcast meteorology classes, the new graphic system is now being used for weather analysis and forecasting, aviation weather, and tropical meteorology classes. ERAU continues to expand its use to create more content for the website and as a teaching tool for student pilots and a variety of other situations. And students are sitting up and taking notice. Enrollment in broadcast meteorology classes has more than doubled since they began using the new tools.

Explanations work better with good graphics

Robert Eicher, Assistant Professor of Meteorology, was searching around for a high quality instructional weather analysis and graphics system for his broadcast meteorology class. Before coming to ERAU, Eicher had worked as a television weather broadcaster for two decades. He knew the power of good graphics in explaining weather to audiences and was looking to extend that to his students.

“Lectures are usually accompanied by PowerPoint presentations with a lot of words,” Eicher explains. “As they say, a picture is worth a thousand words – it is easier to explain what’s going on if you have a good graphic. And animated graphics go a lot farther for illustrating what we are teaching about weather.”

Professor Eicher began shopping around for a weather analysis system that would fit into an instructional environment. After looking at available options, he eventually opted for Baron Lynx™, which combines weather graphics, weather analysis and storm tracking into a single platform. He had familiarity with Baron weather products, having used them at television stations in Orlando, Florida and Charlotte, North Carolina.

The Lynx platform includes several components. One area is dedicated to weather analysis, where students analyze weather data across the continental United States. Another area enables students to assemble and prepare the weather show and deliver it during a weathercast. The third is a creative component dedicated to weather graphics, which allows students to generate new weather graphics using existing graphical elements or by creating entirely new artwork.

Lynx was developed with the direct input of more than 70 broadcast professionals, including meteorologists and news directors. When introduced in 2016, Lynx garnered rave reviews for telling captivating weather stories and dominating station-defining moments. TV stations liked that Lynx offered them a scalable architecture that they could configure specifically to their own needs. With that came an arsenal of tools, including wall interaction, instant social media posting, forecast editing, daily graphics, and of course storm analysis. Integration across all platforms – on-air, online, and mobile – was another big plus for weather news professionals.

For Professor Eicher, the two deciding factors in favor of selecting Lynx were value for the money and customizability. “Compared to other options I looked at, you get a lot more for your money – a bigger bang for the buck. I also liked the customizability, which works well for our unique situation. As a university, we are already getting a ton of data from an existing National Oceanic and Atmospheric Administration (NOAA) data port. I like that Lynx allows us to incorporate the data we are getting and make good graphics with it. We can get in and tinker around and do some innovative things for the classroom environment.”

One unique example involved teaching aviation school students about the potential for icing. Eicher went into Lynx and adjusted contours at an atmospheric air pressure of 700 millibars (at 10,000 feet) to show only the 32 degree line, so the students could see where the freezing level was at 10,000 feet. He then adjusted the contours of relative humidity that were 75 percent and above. The result illustrated where the temperature and humidity combined to produce ice, showing the icing potential at that flying level. “It is a unique graphic that I don’t think anyone else has,” noted Eicher.
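A simplified version of that overlay logic can be expressed in a few lines, assuming gridded temperature and relative humidity fields at 700 mb are already in hand; the sample arrays below are made up for illustration and do not come from Lynx or any NOAA feed.

```python
# Rough sketch of the icing-potential overlay described above: flag grid points
# at the 700 mb level where the air is at or below freezing and relative
# humidity is 75% or higher. The small arrays are made-up sample data.

import numpy as np

def icing_potential_mask(temp_f_700mb: np.ndarray, rh_pct_700mb: np.ndarray) -> np.ndarray:
    """Boolean grid: True where freezing temperatures and high humidity coincide."""
    return (temp_f_700mb <= 32.0) & (rh_pct_700mb >= 75.0)

if __name__ == "__main__":
    temps = np.array([[28.0, 35.0], [30.0, 31.0]])      # deg F at ~10,000 ft
    humidity = np.array([[80.0, 90.0], [60.0, 77.0]])   # percent
    print(icing_potential_mask(temps, humidity))
    # [[ True False]
    #  [False  True]]
```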

The program is being used for weather analysis and forecasting, and it also enables broadcast meteorology students to publish their forecasts and make them visible to people outside the classroom. “In the past, students would have written their forecasts and only their professor would see it,” said Professor Eicher. “Now the class has a clear purpose. Student meteorologists use Lynx to prepare weather analyses and forecasts and publish the results to the ERAU website using the Baron Digital Content Manager (DCM) portal.”

While not a part of Lynx, the DCM is a web portal that communicates with Lynx. Using the DCM, meteorologists can update forecasts remotely and publish them across mobile platforms and websites. It is accessible to anyone who has credentials: students can log in from their home, lab, or class and enter the data. The DCM forecast builder feature allows users to populate their forecast, select weather graphics associated with specific forecast conditions using a spreadsheet-like form for the data entry, and publish them to the ERAU website. The forecast graphics and the resulting format are predefined during system setup.

On weekends, holiday breaks, or summer vacation, the DCM can be set to revert to the National Weather Service (NWS) forecast, solving the problem of what to do if students are not there to issue a forecast. Eicher considers this a feature that would be extremely useful for any university, because it means a current forecast will always appear on the website. According to Professor Eicher, “The ability to update the forecast via our web portal provided a solution for a need that had been unmet for five years or more.”


Teaching Assistant Michelle Hughes uses Lynx to prepare weather analyses and forecasts and publish to the ERAU website.

In general, Eicher has found a lack of good real time weather instructional material, so he has turned to the Lynx program to develop better teaching tools. In addition to the original broadcast meteorology course, he and other instructors are also using the program for aviation weather and tropical meteorology classes. He anticipates it will soon be used to develop instructional graphics for an introduction to meteorology course. For example, Lynx will allow instructors to move beyond just a still image of information on upper level winds that show current wind patterns and then animate the winds with moving arrows. This type of animation clearly illustrates conditions and highlights areas where attention should be focused.


ERAU is also using the program to develop other high quality instructional materials, including animated graphics that can be used to explain important regional and national weather events, for example, the recent California wildfires.

Positive feedback for new teaching tool

ERAU faculty and administration are extremely pleased with the availability of the new teaching tool for broadcast and meteorology students, and student pilots. Located in a broadcast studio that is part of the meteorology computer lab, Baron Lynx is accessible to the entire meteorology faculty and students, with output connected to adjacent classrooms. Enrollment in broadcast meteorology classes has more than doubled since ERAU obtained these new tools.

Baron has provided a high level of support and training on the product. The Baron technical support staff is used to supporting television stations 24/7/365, so they were not thrown off by students calling on a Saturday afternoon with questions about how to produce graphics for their forecasts. The students showed off their new knowledge in a live Facebook stream on travel weather the day before Thanksgiving.

Eicher also gave high grades to the staff training provided. “The staff person brought in to train me on use of the program actually assisted with teaching the broadcast meteorology class, showing the students how to use the program directly.”

Customizable graphics product ideal for classroom environment

The customizable Lynx product enables the university to incorporate a range of other available weather data and create graphics ideal for the classroom environment.

The university is also looking into developing a range of other graphics for use on its new website, as well as creating more content in Lynx for educational purposes. Also in the planning stages is the idea of feeding other camera sources, such as a roof-mounted sky camera, into the Lynx program and combining them with weather data. “Word is getting out that we have a pretty unique opportunity,” concludes Professor Eicher.

It was a balmy 67-degree day in New York on March 15, which prompted the inevitable joke that since it was warm outside, climate change must be real. The wry comment was made by one of the speakers at the New York Academy of Sciences’ symposium “Science for Decision Making in a Warmer World: 10 Years of the NPCC.”

The NPCC is the New York City Panel on Climate Change, an independent body of scientists that advises the city on climate risks and resiliency. The symposium coincided with the release of the NPCC’s 2019 report, which found that in the New York City area extreme weather events are becoming more pronounced, high temperatures in summer are rising, and heavy downpours are increasing.

“The report tracks increasing risks for the city and region due to climate change,” says Cynthia Rosenzweig, co-chair of the NPCC and senior research scientist at Columbia University’s Earth Institute. “It continues to lay the science foundation for development of flexible adaptation pathways for changing climate conditions.”

...

http://www.iii.org/insuranceindustryblog/new-york-citys-climate-change-resiliency/

Thursday, 28 March 2019 19:52

NEW YORK CITY’S DISASTER RESILIENCY

Where do I start?

This is a conversation and situation I’ve had many times with different people, and it may feel familiar to some of you. You’ve been tasked with developing a BC/DR program for your organization. Assume you have little or nothing in place, and what you do have is so out of date that starting fresh seems the wiser course. The question invariably comes up: Where do I start?

Depending on your training or background, this may start with a Business Impact Analysis (BIA) in order to prioritize and analyze your organization’s critical processes. If you have a security or internal audit background, you may feel inclined to start with a Risk Assessment. You may have an IT background and feel that your application infrastructure is paramount and that you need a DR program immediately. If you’ve come from the emergency services or military, life safety might be foremost in your mind, and emergency response and crisis management might be the first steps. I’ve seen clients from big pharmaceutical companies that treat their supply chain as their number one priority.

The reality is that although there are prescribed methodologies with starting points outlined in best practices by various institutes and organizations with expertise in the field, there is only one expert when it comes to your organization. You.

...

https://www.bcinthecloud.com/2019/03/business-continuity-methodology/

How do you create an insights-driven organization? One way is leadership. And we’d like to hear about yours.

Today, half of the respondents in Forrester’s Business Technographics® survey data report that their organizations have a chief data officer (CDO). A similar number report having a chief analytics officer (CAO). Many firms without these insights leaders report plans to appoint one in the near future. Advocates for data and analytics now have permanent voices at the table.

To better understand these leadership roles, Forrester fielded its inaugural survey on CDO/CAOs in the summer of 2017. Now we’re eager to learn how the mandates, responsibilities, and influence of data and analytics leaders and their teams have evolved in the past 18 months. Time for a new survey!

Take Forrester’s Data And Analytics Leadership Survey

Are you responsible for data and analytics initiatives at your firm? If so, we need your expertise and insights! Forrester is looking to understand:

  • Which factors drive the appointment of data and analytics leaders, as well as the creation of a dedicated team?
  • Which roles are part of a data and analytics function? How is the team organized?
  • What challenges do data and analytics functions encounter?
  • What is the working relationship between data and analytics teams and other departments?
  • What data and analytics use case, strategy, technology, people, and process support do these teams offer? How does the team prioritize data and analytics requests from stakeholders?
  • Which data providers do teams turn to for external data?
  • Which strategies do teams use to improve data and analytics literacy within the company?

Please complete our 20-minute (anonymous) Data and Analytics Leadership Survey. The results will fuel an update to the Forrester report, “Insights-Driven Businesses Appoint Data Leadership,” as well as other reports on the “data economy.”

For other research on data and analytics leadership, please also take a look at “Strategic CDOs Accelerate Insights-To-Action” and “Data Leaders Weave An Insights-Driven Corporate Fabric.”

As a thank-you, you’ll receive a courtesy copy of the initial report of the survey’s key findings.

Thanks in advance for your participation.

https://go.forrester.com/blogs/data-and-analytics-leaders-we-need-you/

Friday, 08 March 2019 16:27

Data And Analytics Leaders, We Need You!

Weather tools help Team Rubicon respond quicker and reduce risks

By Glen Denny, President, Enterprise Solutions, Baron Critical Weather Solutions

Team Rubicon is an international disaster response nonprofit with a mission of using the skills and experiences of military veterans and first responders to rapidly provide relief to communities in need. Headquartered in Los Angeles, California, Team Rubicon has more than 80,000 volunteers around the country ready to jump into action when needed to provide immediate relief to those affected by natural disasters.

More than 80 percent of the disasters Team Rubicon responds to are weather-related, including crippling winter storms, catastrophic hurricanes, and severe weather outbreaks – like tornadoes. While always ready to serve, the organization needed better weather intelligence to help them prepare and mitigate risks. After adopting professional weather forecasting and monitoring tools, operations teams were able to pinpoint weather hazards, track storms, view forecasts, and set up custom alerts. And the intelligence they gained made a huge difference in the organization’s response to Hurricanes Florence and Michael.

Team Rubicon relies on skills and experiences of military veterans and first responders

About 75 percent of Team Rubicon volunteers are military veterans, who find that their skills in emergency medicine, small-unit leadership, and logistics are a great fit with disaster response. Their training also helps them hunker down in challenging environments to get the job done. A further 20 percent of volunteers are trained first responders, while the rest are volunteers from all walks of life. The group is a member of National Voluntary Organizations Active in Disaster (National VOAD), an association of organizations that mitigate and alleviate the impact of disasters.

By focusing on underserved or economically-challenged communities, Team Rubicon seeks to make the largest impact possible. According to William (“TJ”) Porter, manager of operational planning, Team Rubicon’s core mission is to help those who are often forgotten or left behind; they place a special emphasis on helping under-insured and uninsured populations.

Porter, a 13-year Air Force veteran, law enforcement officer, world traveler, and former American Red Cross worker, proudly stands by Team Rubicon’s service principles, “Our actions are characterized by the constant pursuit to prevent or alleviate human suffering and restore human dignity – we help people on their worst day.”

Weather-related disasters pose special challenges

The help Team Rubicon provides for weather-related disasters runs the gamut: removing trees from roadways, clearing paths for service vehicles, bringing in supplies, conducting search and rescue missions (including boat rescues), dealing with flooded-out homes, mucking out after a flood, mold remediation, and just about anything else needed. While Team Rubicon had greatly expanded its equipment inventory in recent years to help with these tasks, the organization lacked the deep level of weather intelligence that could help them understand and mitigate risks – and keep their teams safe from danger.

That’s where Baron comes into the story. After learning at the Virginia Emergency Management Conference of the impressive work Team Rubicon is doing, a Baron team member struck up a conversation with the organization, asking if it had a need for detailed and accurate weather data to help plan its efforts. Team Rubicon jumped at the opportunity, and Baron ultimately donated access to its Baron Threat Net product. Key features allow users to pinpoint weather hazards by location, track storms, view forecasts, and set up custom alerts, including location-based pinpoint alerting and standard alerts from the National Weather Service (NWS). The web portal weather monitoring system provides street-level views and the ability to layer numerous data products. Threat Net also offers a mobile companion application that gives Team Rubicon access to real-time weather monitoring on the go.
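Threat Net's alerting is proprietary, but the general idea of a location-based proximity alert (notify when a tracked hazard comes within a set distance of a saved location) can be sketched with a simple great-circle distance check. The coordinates, radius, and hazard list below are purely illustrative.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical saved location (a volunteer staging area) and hazard reports.
staging_area = (34.35, -77.90)  # latitude, longitude
hazards = [
    {"type": "lightning", "lat": 34.40, "lon": -77.95},
    {"type": "rotation", "lat": 35.10, "lon": -78.60},
]

ALERT_RADIUS_MILES = 10
for hazard in hazards:
    distance = haversine_miles(*staging_area, hazard["lat"], hazard["lon"])
    if distance <= ALERT_RADIUS_MILES:
        print(f"ALERT: {hazard['type']} within {distance:.1f} miles of the staging area")
```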

This suited Team Rubicon down to the ground. “In years past, we didn’t have a good way to monitor weather,” explains Porter. “We went onto the NWS, but our folks are not meteorologists, and they don’t have that background to make crucial decisions. Baron Threat Net helped us understand risks and mitigate the risks of serious events. It plays a crucial role in getting teams in as quickly as possible so we can help the greatest number of people.”

New weather tools help with response to major hurricanes

The new weather intelligence tools have already had a huge impact on Team Rubicon’s operations. Take the example of how access to weather data helped Team Rubicon with its massive response to Hurricane Florence. A day or so before the hurricane was due to make landfall, Dan Gallagher, Enterprise Product Manager and meteorologist at Baron Services, received a call from Team Rubicon, requesting product and meteorological support. Individual staff had been using the new Baron Threat Net weather tools to a degree since gaining access to them, but the operations team wanted more training and support in the face of what looked like a major disaster barreling towards North Carolina, South Carolina, Virginia, and West Virginia.

Gallagher, a trained meteorologist with more than 18 years of experience in meteorological research and software development, quickly hopped on a plane, arriving at Team Rubicon’s National Operations Center in Dallas. His first task was to meet operational manager Porter’s request to help them guide reconnaissance teams entering the area. They wanted to place a reconnaissance team close to the storm – but not in mortal danger. Using the weather tools, Gallagher located a spot north of Wilmington, NC between the hurricane’s eyewall and outer rain bands that could serve as a safe spot for reconnaissance.

The next morning, Gallagher provided a weather briefing to ensure that operations staff had the latest weather intelligence. “I briefed them on where the storm was, where it was heading, the dangers that could be anticipated, areas likely to be most affected, and the hazards in these areas.”

Throughout the day, Gallagher conducted a number of briefings and kept the teams up to date as Hurricane Florence slowly moved overland. He also provided video weather briefings for the reconnaissance team in their car en route to their destination.

Another crew based in Charlotte was planning the safest route for trucking in supplies based on weather conditions. They wanted help in choosing whether to haul the trailer from Atlanta, GA or Alexandria, VA. “I was not there to make a recommendation on an action but rather to give them the weather information they need to make their decision,” explains Gallagher. “As a meteorologist, I know what the weather is, but they decide how it impacts their operation. As soon as I gave a weather update they could make a decision within seconds, making it possible to act on that decision.” Team Rubicon used the information Gallagher provided to select the Alexandria, VA route; their crackerjack logistics team was then able to quickly make all the needed logistical arrangements.

In addition to weather briefings, Gallagher provided more detailed product training on Baron Threat Net, observed how the teams actually use the product, and learned how the real-time products were performing. He also got great feedback on other data products that might enhance Team Rubicon’s ability to respond to disasters.

Team Rubicon gave very high marks to the high-resolution weather/forecast model available in Baron Threat Net. They relied upon the predictive precipitation accumulation and wind speed information, as well as information on total precipitation accumulation (what has already fallen in the past 24 hours).

The wind damage product showing shear rate was very useful to Team Rubicon. In addition, the product did an excellent job of detecting rotation, including picking out the weak tornadoes spawned in Hurricane Florence’s outer rain bands. These are typically very difficult to identify and warn people about, because they spin up quickly and are relatively shallow and weak (with tornado damage of EF0 or EF1 as measured on the Enhanced Fujita Scale). Gallagher had seen how well the wind damage product performed in larger tornado cases but was particularly gratified at how well it helped the team detect these smaller ones.

For example, Lauren Vatier of Team Rubicon’s National Incident Management Team commented that she had worked with Baron Threat Net before the Florence event, but using it so intensively made her more familiar with the product and really helped cement her knowledge. “Before Florence I had not used Baron Threat Net for intel purposes. Today I am looking for information on rain accumulation and wind, and I’m looking ahead to help the team understand what the situation will look like in the future. It helps me understand and verify the actual information happening with the storm. I don’t like relying on news articles. Now I can look into the product and get accurate and reliable information.”

Vatier also really likes the ability to pinpoint information on a map showing colors and ranges. “You can click on a point and tell how much accumulation has occurred or what the wind speed is. The pinpointing is a valuable part of Baron Threat Net.” The patented Baron Pinpoint Alerting technology automatically sends notifications any time impactful weather approaches; alert types include severe storms and tornadoes; proximity alerts for approaching lightning, hail, snow and rain; and National Weather Service warnings. She concludes, “I feel empowered by the program. It ups my confidence in my ability to provide accurate information.”

TJ Porter concurred that Baron Threat Net helped Team Rubicon mobilize the large teams that deployed for Hurricane Florence. “It is crucial to put people on the ground and make sure they’re safe. Baron Threat Net helps us respond quicker to disasters. It also helps the strike teams ensure they are not caught up in other secondary or rapid onset weather events.”

Porter explains that the situation unit leaders actively monitor weather through the day using Baron Threat Net. “We are giving them all the tools at our disposal, because these are the folks who provide early warnings to keep our folks safe.”

Future-proofing weather data

Being on the ground with Team Rubicon during the Hurricane Florence disaster recovery response gave Baron’s Gallagher an unusual opportunity to discuss other ways Baron weather products could help respond to weather-related disasters. According to Porter, “We are looking to Baron to help us understand secondary events, like the extensive flooding resulting from Hurricane Florence, and to understand where these hazards are today, tomorrow, and the next day.”

In addition, Team Rubicon is committed to targeting those areas of greatest need, so they want to be able to layer weather information with other data sets, especially social vulnerability, including the location of areas with uninsured or underinsured populations. Says Porter, “Getting into areas we know need help will shave minutes, hours, or even days off how long it takes to be there helping.”

In the storm’s aftermath

At the time this article was written, hundreds of Team Rubicon volunteers were deployed as part of Hurricane Florence response operations and later in response to Hurricane Michael. Their work has garnered them a tremendous amount of national appreciation, including a spotlight appearance during Game 1 of the World Series. T-Mobile used its commercial television spots to support the organization, also pledging to donate $5,000 per post-season home run plus $1 per Twitter or Instagram post using #HR4HR to Team Rubicon.

Baron’s Gallagher appreciated the opportunity to see in real time how customers use its products, saying “The experience helped me frame improvements we can develop that will positively affect our clients using Baron Threat Net.”

By Alex Winokur, founder of Axxana

 

Disaster recovery is now on the list of top concerns of every CIO. In this article we review the evolution of the disaster recovery landscape, from its inception until today. We look at the current understanding of disaster behavior and, consequently, of disaster recovery processes. We also try to cautiously anticipate the future, outlining the main challenges associated with disaster recovery.

The Past

The computer industry is relatively young. The first commercial computers appeared in the 1950s—not even seventy years ago. The history of disaster recovery (DR) is even younger. Table 1 outlines the appearance of the various technologies necessary to construct a modern DR solution.


Table 1 – Early history of DR technology development

 

From Magnetic Tapes to Data Networks

The first magnetic tapes for computers were used as input/output devices. That is, input was punched onto punch cards that were then stored offline to magnetic tapes. Later, UNIVAC I, one of the first commercial computers, was able to read these tapes and process their data. Later still, output was similarly directed to magnetic tapes that were connected offline to printers for printing purposes. Tapes began to be used as a backup medium only after 1954, with the introduction of the mass storage device (RAMAC).

Figure 1: First Storage System - RAMAC

Although modern wide-area communication networks date back to 1974, data has been transmitted over long-distance communication lines since 1837 via telegraphy systems. These telegraphy communications have since evolved to data transmission over telephone lines using modems.

Modems were first deployed at scale in 1958 to connect United States air defense systems; however, their throughput was very low compared to what we have today. The FAA clustered system used communication links that were designed for computers to communicate with their peripherals (e.g., tapes). Local area networks (LANs) as we now know them had not been invented yet.

Early Attempts at Disaster Recovery

It wasn’t until the 1970s that concerns about disaster recovery started to emerge. In that decade, the deployment of IBM 360 computers reached a critical mass, and they became a vital part of almost every organization. Until the mid-1970s, the perception was that if a computer failed, it would be possible to fall back to paper-based operation as was done in the 1960s. However, the widespread adoption of digital technologies in the 1970s led to a corresponding increase in technology failures, while theoretical calculations, backed by real-world evidence, showed that switching back to paper-based work was no longer practical.

The emergence of terrorist groups in Europe like the Red Brigades in Italy and the Baader-Meinhof Group in Germany further escalated concerns about the disruption of computer operations. These left-wing organizations specifically targeted financial institutions. The fear was that one of them would try to blow up a bank’s data centers.

At that time, communication networks were in their infancy, and replication between data centers was not practical.

Parallel workloads. IBM came up with the idea of using the FAA clustering technology to build two adjoining computer rooms, separated by a steel wall, with one cluster node in each room. The idea was to run the same workload twice and to be able to immediately fail over from one system to the other if one system was attacked. A closer analysis revealed that in the case of a terror attack, the only surviving object would be the steel wall, so the plan was abandoned.

Hot, warm, and cold sites. The inability of computer vendors (IBM was the main vendor at the time) to provide an adequate DR solution made way for dedicated DR firms like SunGard to provide hot, warm, or cold alternate sites. Hot sites, for example, were duplicates of the primary site; they independently ran the same workloads as the primary site, as communication between the two sites was not available at the time. Cold sites served as repositories for backup tapes. Following a disaster at the primary site, operations would resume at the cold site by allocating equipment, restoring from backups, and restarting the applications. Warm sites were a compromise between a hot site and a cold site. These sites had hardware and connectivity already established; however, recovery was still done by restoring the data from backups before the applications could be restarted.

Backups and high availability. The major advances in the 1980s were around backups and high availability. On the backup side, regulations requiring banks to have a testable backup plan were enacted. These were probably the first DR regulations to be imposed on banks; many more followed through the years. On the high availability side, Digital Equipment Corporation (DEC) made the most significant advances in LAN communications (DECnet) and clustering (VAXcluster).

The Turning Point

On February 26, 1993, the first bombing of the World Trade Center (WTC) took place. This was probably the most significant event shaping the disaster recovery architectures of today. People realized that the existing disaster recovery solutions, which were mainly based on tape backups, were not sufficient. They understood that too much data would be lost in a real disaster event.

SRDF. By this time, communication networks had matured, and EMC became the first to introduce storage-to-storage replication software, called Symmetrix Remote Data Facility (SRDF).

 

Behind the Scenes at IBM

At the beginning of the nineties, I was with IBM’s research division. At the time, we were busy developing a very innovative solution to shorten the backup window: backups were the foundation of all DR, and the existing backup windows (dead hours during the night) were becoming insufficient to complete the daily backup. The solution, called concurrent copy, was the ancestor of all snapshotting technologies, and it was the first intelligent function running within the storage subsystem. The WTC event in 1993 left IBM fighting yesterday’s battles by developing a backup solution, while giving EMC the opportunity to introduce storage-based replication and become the leader in the storage industry.

 

The first few years of the 21st century will always be remembered for the events of September 11, 2001—the date of the complete annihilation of the World Trade Center. Government, industry, and technology leaders realized then that some disasters can affect the whole nation, and therefore DR had to be taken much more seriously. In particular, the attack demonstrated that existing DR plans were not adequate to cope with disasters of such magnitude. The notion of local, regional, and nationwide disasters crystalized, and it was realized that recovery methods that work for local disasters don’t necessarily work for regional ones.

SEC directives. In response, the Securities and Exchange Commission (SEC) issued a set of very specific directives in the form of the “Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System.” These regulations, still in effect today, bind all financial institutions. The DR practices codified in the SEC regulations quickly propagated to other sectors, and disaster recovery became a major area of activity for all organizations relying on IT infrastructure.

The essence of these regulations is as follows:

  1. The economic stance of the United States cannot be compromised under any circumstance.
  2. Relevant financial institutions are obliged to correctly, without any data loss, resume operations by the next business day following a disaster.
  3. Alternate disaster recovery sites must use different physical infrastructure (electricity, communication, water, transportation, and so on) than the primary site.

Note that Requirements 2 and 3 above are somewhat contradictory. Requirement 2 necessitates synchronous replication to facilitate zero data loss, while Requirement 3 basically dictates long distances between sites—thereby making the use of synchronous replication impossible. This contradiction is not addressed within the regulations and is left to each implementer to deal with at its own discretion.
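The tension between Requirements 2 and 3 is easiest to see with a little arithmetic: synchronous replication adds at least one network round trip to every committed write, and that round trip grows with distance. The figures below are rough, order-of-magnitude estimates (assuming a signal speed in fiber of roughly two thirds the speed of light and ignoring switching and protocol overhead), not measured latencies for any particular product.

```python
# Rough minimum round-trip propagation delay for synchronous replication.
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 2 / 3  # approximate signal speed in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Physical round-trip time in milliseconds for one hop each way."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for km in (10, 100, 500, 1000):
    print(f"{km:>5} km between sites -> at least {round_trip_ms(km):.2f} ms added per write")
```

At metro distances the added delay is a fraction of a millisecond; at the hundreds of kilometers needed for true infrastructure independence, it reaches several milliseconds per committed write, which is why the two requirements pull in opposite directions.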

The secret to resolving this contradiction lies in the ability to reconstruct missing data if or when data loss occurs. The nature of most critical data is such that there is always at least one other instance of this data somewhere in the universe. The trick is to locate it, determine how much of it is missing in the database, and augment the surviving instance of the database with this data. This process is called data reconciliation, and it has become a critical component of modern disaster recovery. [See The Data Reconciliation Process sidebar.]

 

The Data Reconciliation Process

If data is lost as a result of a disaster, the database becomes misaligned with the real world. The longer this misalignment exists, the greater the risk of application inconsistencies and operational disruptions. Therefore, following a disaster, it is very important to realign the databases with the real world as soon as possible. This process of realignment is called data reconciliation.

The reconciliation process has two important characteristics:

  1. It is based on the fact that the data lost in a disaster exists somewhere in the real world, and thus it can be reconstructed in the database.
  2. The duration and complexity of the reconciliation is proportional to the recovery point objective (RPO); that is, it’s proportional to the amount of data lost.

One of the most common misconceptions in disaster recovery is that RPO (for example, RPO = 5 minutes) refers simply to how many minutes of data the organization is willing to lose. What RPO really means is that the organization must be able to reconstruct and reconsolidate (i.e., reconcile) the last five minutes of missing data. Note that the higher the RPO (and therefore the greater the data loss), the longer the RTO and the costlier the reconciliation process. Catastrophes typically occur when the RPO is compromised and the reconciliation process takes much longer than planned.

In most cases, the reconciliation process is quite complicated, consisting of time-consuming processes to identify the data gaps and then resubmitting the missing transactions to realign the databases with real-world status. This is a costly, mainly manual, error-prone process that greatly prolongs the recovery time of the systems and magnifies risks associated with downtime.
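As a concrete, if greatly simplified, picture of what reconciliation involves, the sketch below compares the transactions that survived in the recovered database against an external source of record (a counterparty feed, clearing house log, or similar) and resubmits whatever falls inside the loss window. All of the records, IDs, and the five-minute window are invented for illustration; real reconciliation is far messier.

```python
from datetime import datetime, timedelta

# Hypothetical records: what survived in the failed-over database versus what
# an external source of record saw before the disaster.
surviving_txn_ids = {"T100", "T101", "T102"}
external_records = [
    {"id": "T101", "time": datetime(2019, 1, 7, 9, 58)},
    {"id": "T102", "time": datetime(2019, 1, 7, 9, 59)},
    {"id": "T103", "time": datetime(2019, 1, 7, 10, 1)},  # lost in the disaster
    {"id": "T104", "time": datetime(2019, 1, 7, 10, 3)},  # lost in the disaster
]

disaster_time = datetime(2019, 1, 7, 10, 5)
rpo = timedelta(minutes=5)  # "RPO = 5 minutes" -> up to 5 minutes to reconcile

# Only transactions inside the loss window can be missing; find and resubmit them.
missing = [
    rec for rec in external_records
    if rec["id"] not in surviving_txn_ids and rec["time"] >= disaster_time - rpo
]

for rec in missing:
    print(f"Resubmitting transaction {rec['id']} from {rec['time']:%H:%M}")
```

The longer the loss window, the more records have to be located, compared, and resubmitted, which is why a larger RPO drags the RTO out with it.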

 

The Present

The second decade of the 21st century has been characterized by new types of disaster threats, including sophisticated cyberattacks and extreme weather hazards caused by global warming. It is also characterized by new DR paradigms, like DR automation, disaster recovery as a service (DRaaS), and active-active configurations.

These new technologies are for the most part still in their infancy. DR automation tools attempt to orchestrate a complete site recovery through the invocation of a single “site failover” command, but they are still very limited in scope. A typical tool in this category is VMware Site Recovery Manager (SRM). DRaaS attempts to reduce the cost of a DR-compliant installation by locating the secondary site in the cloud. The new active-active configurations try to reduce equipment costs and recovery time by applying techniques from high availability, which were designed to recover from a component failure rather than a complete site failure.

Disasters vs. Catastrophes

The following definitions of disasters and disaster recovery have been refined over the years to make a clear distinction between the two main aspects of business continuity: high availability protection and disaster recovery. This distinction is important because it crystalizes the difference between disaster recovery and recovery from a single component failure, which is covered by highly available configurations, and in doing so it also accounts for the limitations of using active-active solutions for DR.

A disaster in the context of IT is either a significant adverse event that causes an inability to continue operation of the data center or a data loss event where recovery cannot be based on equipment at the data center. In essence, disaster recovery is a set of procedures aimed to resume operations following a disaster by failing over to a secondary site.

From a DR procedures perspective, it is customary to classify disasters into 1) regional disasters like weather hazards, earthquakes, floods, and electricity blackouts and 2) local disasters like local fires, onsite electrical failures, and cooling system failures.

Over the years, I have also noticed a third, independent classification of disasters. Disasters can also be classified as catastrophes. In principle, a catastrophe is a disastrous event in which, in the course of a disaster, something very unexpected happens that causes the disaster recovery plans to dramatically miss their service level agreement (SLA); that is, they typically exceed their recovery time objective (RTO).

When DR procedures go as planned for regional and local disasters, organizations fail over to a secondary site and resume operations within pre-determined parameters for recovery time (i.e., RTO) and data loss (i.e., RPO). The organization’s SLAs, business continuity plans, and risk management goals align with these objectives, and the organization is prepared to accept the consequent outcomes. A catastrophe occurs when these SLAs are compromised.

Catastrophes can also result from simply failing to execute the DR procedures as specified, typically due to human errors. However, for the sake of this article, let’s be optimistic and assume that DR plans are always executed flawlessly. We shall concentrate only on unexpected events that are beyond human control.

Most of the disaster events that have been reported in the news recently (for example, the Amazon Prime Day outage in July 2018 and the British Airways bank holiday outage in 2017) have been catastrophes related to local disasters. Had DR been properly applied to the disruptions at hand, nobody would have noticed that there was a problem, as the DR procedures were designed to provide almost zero recovery time and hence zero downtime.

The following two examples provide a closer look at how catastrophes occur.

9/11 – Following the September 11 attack, several banks experienced major outages. Most of them had a fully equipped alternate site in Jersey City—no more than five miles away from their primary site. However, the failover failed miserably because the banks’ DR plans called for critical personnel to travel from their primary site to their alternate site, but nobody could get out of Manhattan.

A data center power failure during a major snow storm in New England – Under normal DR operations at this organization, data was synchronously replicated to an alternate site. However, 90 seconds prior to a power failure at the primary site, the central communication switch in the area lost power too, which cut all WAN communications. As a result, the primary site continued to produce data for 90 seconds without replication to the secondary site; that is, until it experienced the power failure. When it finally failed over to the alternate site, 90 seconds of transactions were missing; and because the DR procedures were not designed to address recovery where data loss had occurred, the organization experienced catastrophic downtime.

The common theme of these two examples is that in addition to the disaster at the data center there was some additional—unrelated—malfunction that turned a “normal” disaster into a catastrophe. In the first case, it was a transportation failure; in the second case, it was a central switch failure. Interestingly, both failures occurred to infrastructure elements that were completely outside the control of the organizations that experienced the catastrophe. Failure of the surrounding infrastructure is indeed one of the major causes for catastrophes. This is also the reason why the SEC regulations put so much emphasis on infrastructure separation between the primary and secondary data center.

Current DR Configurations

In this section, I’ve included examples of two traditional DR configurations that separate the primary and secondary center, as stipulated by the SEC. These configurations have predominated in the past decade or so, but they cannot ensure zero data loss in rolling disasters and other disaster scenarios, and they are being challenged by new paradigms such as that introduced by Axxana’s Phoenix. While a detailed discussion would be outside the scope of this article, suffice it to say that Axxana’s Phoenix makes it possible to avoid catastrophes such as those just described—something that is not possible with traditional synchronous replication models.


Figure 2 – Typical DR configuration

 

Typical DR configuration. Figure 2 presents a typical disaster recovery configuration. It consists of a primary site, a remote site, and another set of equipment at the primary site, which serves as a local standby.

The main goal of the local standby installation is to provide redundancy to the production equipment at the primary site. The standby equipment is designed to provide nearly seamless failover capabilities in case of an equipment failure—not in a disaster scenario. The remote site is typically located at a distance that guarantees infrastructure independence (communication, power, water, transportation, etc.) to minimize the chances of a catastrophe. It should be noted that the typical DR configuration is very wasteful. Essentially, an organization has to triple the cost of equipment and software licenses—not to mention the increased personnel costs and the cost of high-bandwidth communications—to support the configuration of Figure 2.


Figure 3 – DR cost-saving configuration

 

Traditional ideal DR configuration. Figure 3 illustrates the traditional ideal DR configuration. Here, the remote site serves both DR purposes and high availability purposes. Such configurations are sometimes realized in the form of extended clusters like Oracle RAC One Node on Extended Distance. Although traditionally considered the ideal, they are a trade-off between survivability, performance, and cost. The organization saves on the cost of one set of equipment and licenses, but it compromises survivability and performance. That’s because the two sites have to be in close proximity and often share the same infrastructure, so they are more likely to both be affected by the same regional disaster; at the same time, performance is compromised by the increased latency caused by separating the two cluster nodes from each other.


Figure 4 – Consolidation of DR and high availability configurations with Axxana’s Phoenix


True zero-data-loss configuration. Figure 4 represents a cost-saving solution with Axxana’s Phoenix. In case of a disaster, Axxana’s Phoenix provides zero-data-loss recovery at any distance. So, with the help of Oracle’s high availability support (fast-start failover and transparent application failover), Phoenix provides functionality very similar to extended cluster functionality. With Phoenix, however, this functionality can be implemented over much longer distances and with much lower latency, providing true cost savings over the configuration shown in Figure 3.

The Future

In my view, the future is going to be a constant race between new threats and new disaster recovery technologies.

New Threats and Challenges

In terms of threats, global warming creates new weather hazards that are fiercer, more frequent, and far more damaging than in the past—and in areas that have not previously experienced such events. Terror attacks are on the rise, thereby increasing threats to national infrastructures (potential regional disasters). Cyberattacks—in particular ransomware, which destroys data—are a new type of disaster. They are becoming more prolific, more sophisticated and targeted, and more damaging.

At the same time, data center operations are becoming more and more complex. Data is growing exponentially. Instead of getting simpler and more robust, infrastructures are getting more diversified and fragmented. In addition to legacy architectures that aren’t likely to be replaced for a number of years to come, new paradigms like public, hybrid, and private clouds; hyperconverged systems; and software-defined storage are being introduced. Adding to that are an increasing scarcity of qualified IT workers and economic pressures that limit IT spending. All combined, these factors contribute to data center vulnerabilities and to more frequent events requiring disaster recovery.

So, this is on the threat side. What is there for us on the technology side?

New Technologies

Of course, Axxana’s Phoenix is at the forefront of new technologies that guarantee zero data loss in any DR configuration (and therefore ensure rapid recovery), but I will leave the details of our solution to a different discussion.

AI and machine learning. Apart from Axxana’s Phoenix, the most promising technologies on the horizon revolve around artificial intelligence (AI) and machine learning. These technologies enable DR processes to become more “intelligent,” efficient, and predictive by using data from DR tests, real-world DR operations, and past disaster scenarios; in doing so, disaster recovery processes can be designed to better anticipate and respond to unexpected catastrophic events. These technologies, if correctly applied, can shorten RTO and significantly increase the success rate of disaster recovery operations. The following examples suggest only a few of their potential applications in various phases of disaster recovery:

  • They can be applied to improve the DR planning stage, resulting in more robust DR procedures.
  • When a disaster occurs, they can assist in the assessment phase to provide faster and better decision-making regarding failover operations.
  • They can significantly improve the failover process itself, monitoring its progress and automatically invoking corrective actions if something goes wrong.

When these technologies mature, the entire DR cycle from planning to execution can be fully automated. They carry the promise of much better outcomes than processes done by humans because they can process and better “comprehend” far more data in very complex environments with hundreds of components and thousands of different failure sequences and disaster scenarios.
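None of these applications is standardized yet, but the failover-monitoring idea in the list above can be illustrated with a deliberately simple anomaly check on step durations; a production system would use far richer models trained on past DR tests. Every name, number, and threshold below is hypothetical.

```python
import statistics

# Hypothetical durations (in seconds) of one failover step across past DR tests.
past_runs = [42, 38, 45, 40, 44, 39, 41]

def check_step(step_name, observed_seconds, history):
    """Flag a failover step whose duration strays far from historical behavior,
    a stand-in for the kind of learned check an AI-assisted DR tool might apply."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if abs(observed_seconds - mean) > 3 * stdev:
        print(f"{step_name}: {observed_seconds}s is anomalous "
              f"(history {mean:.0f}s +/- {stdev:.0f}s): invoke corrective action")
    else:
        print(f"{step_name}: {observed_seconds}s looks normal")

check_step("mount replicated volumes", 41, past_runs)
check_step("mount replicated volumes", 180, past_runs)
```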

New models of protection against cyberattacks. The second front where technology can greatly help with disaster recovery is the cyberattack front. Right now, organizations are spending millions of dollars on various intrusion prevention, intrusion detection, and asset protection tools. The evolution should be from protecting individual organizations to protecting the global network. Instead of fragmented, per-organization defense measures, the global communication network should be “cleaned” of threats that can create data center disasters. So, for example, phishing attacks that would compromise a data center’s access control mechanisms should be filtered out in the network—or in the cloud—instead of reaching and being filtered at the endpoints.

Conclusion

Disaster recovery has come a long way—from naive tape backup operations to complex site recovery operations and data reconciliation techniques. The expenses associated with disaster protection don’t seem to go down over the years; on the contrary, they are only increasing.

The major challenge of DR readiness is in its return on investment (ROI) model. On one hand, a traditional zero-data-loss DR configuration requires organizations to implement and manage not only a primary site, but also a local standby and remote standby; doing so essentially triples the costs of critical infrastructure, even though only one third of it (the primary site) is utilized in normal operation.

On the other hand, if a disaster occurs and the proper measures are not in place, the financial losses, reputation damage, regulatory backlash, and other risks can be devastating. As organizations move into the future, they will need to address the increasing volumes and criticality of data. The right disaster recovery solution will no longer be an option; it will be essential for mitigating risk, and ultimately, for staying in business.

Thursday, 07 February 2019 18:15

Disaster Recovery: Past, Present, and Future

What Recent News Means for the Future

The compliance landscape is changing, necessitating changes from the compliance profession as well. A team of experts from CyberSaint discusses what compliance practitioners can expect in the year ahead.

Regardless of your experience or background, 2019 will not be an easy year for information security. In fact, we realize it’s only going to get more complicated. However, what we are excited to see is the awareness that the breaches of 2018 have brought to information security – how more and more senior executives are realizing that information security needs to be treated as a true business function – and 2019 will only see more of that.

Regulatory Landscape

As constituents become more technology literate, we will start to see regulatory bodies ramping up security compliance enforcement for the public and private sectors. Along with the expansion of existing regulations, we will also see new cyber regulations come to fruition. While we may not see U.S. regulations similar to GDPR at the federal level in 2019, conversations around privacy regulation will only become more prominent. What we are seeing already is the expansion of the DFARS mandate to encompass all aspects of the federal government, going beyond the Department of Defense.

...

https://www.corporatecomplianceinsights.com/a-cybersecurity-compliance-crystal-ball-for-2019/

Today Forrester closed the deal to acquire SiriusDecisions.  

SiriusDecisions helps business-to-business companies align the functions of sales, marketing, and product management; Sirius clients grow 19% faster and are 15% more profitable than their peers. Leaders within these companies make more informed business decisions through access to industry analysts, research, benchmark data, peer networks, events, and continuous learning courses, while their companies run the “Sirius Way” based on proven, industry-leading models and frameworks.

Why Forrester and SiriusDecisions? Forrester provides the strategy needed to be successful in the age of the customer; SiriusDecisions provides the operational excellence. The combined unique value can be summarized in a simple statement:

We work with business and technology leaders to develop customer-obsessed strategies and operations that drive growth. 

...

https://go.forrester.com/blogs/forrester-siriusdecisions/

Thursday, 03 January 2019 15:49

Forrester + SiriusDecisions

By Alex Becker, vice president and general manager of Cloud Solutions, Arcserve

If you’re like most IT professionals, your worst nightmare is waking up to the harsh reality that one of your primary systems or applications has crashed and you’ve experienced data loss. Whether caused by fire, flood, earthquake, cyber attack, programming glitch, hardware failure, human error, whatever – this is generally the moment that panic sets in.

While most IT teams understand unplanned downtime is a question of when, not if, many wouldn’t be able to recover business-critical data in time to avoid a disruption to the business. According to new survey research of 759 global IT decision-makers commissioned by Arcserve, half revealed they have less than an hour to recover business-critical data before it starts impacting revenue, yet only a quarter are extremely confident in their ability to do so. The obvious question is why.

UNTANGLING THE KNOT OF 21ST CENTURY IT

Navigating modern IT can seem like stumbling through a maze. Infrastructures are rapidly transforming, spreading across different platforms, vendors and locations, but still often include non-x86 platforms to support legacy applications. With these multi-generational IT environments, businesses face increased risk of data loss and extended downtime caused by gaps in the labyrinth of primary and secondary data centers, cloud workloads, operating environments, disaster recovery (DR) plans and colocation facilities.

Yet, despite the complex nature of today’s environments, over half of companies resort to using two or more backup solutions, further adding to the complexity they’re attempting to solve. Never mind delivering on service level agreements (SLAs) or, in many cases, protecting data beyond mission-critical systems and applications.

It seems modern disaster recovery has become more about keeping the lights on than proactively avoiding the impacts of disaster. Because of this, many organizations develop DR plans to recover as quickly as possible during an outage. But, there’s just one problem: when was their most recent backup?  

WOULD YOU EAT DAY-OLD SUSHI?        

Day-old sushi is your backup. That’s right, if you’ve left your California Roll sitting out all night, chances are it’s the same age as your data if you do daily backups. One will cause a nasty bout of food poisoning and the other a massive loss of business data. Horrified or just extremely nauseated?

You may be thinking this is a bit dramatic, but if your last backup was yesterday, you’re essentially willing to accept more than 24 hours of lost business activity. For most companies, losing transactional information for this length of time would wreak havoc on their business. And, if those backups are corrupted, the ability to recover quickly becomes irrelevant.
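To put a number on that exposure, the quick calculation below estimates how much business activity sits at risk for a given backup interval; the transaction rate is invented purely for illustration.

```python
# Rough data-at-risk estimate for a given backup schedule (illustrative numbers).
transactions_per_hour = 2_000  # hypothetical business volume
for backup_interval_hours in (24, 4, 1, 0.25):
    at_risk = transactions_per_hour * backup_interval_hours
    print(f"Backups every {backup_interval_hours:>5} h -> up to {at_risk:,.0f} transactions exposed")
```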

While the answer to this challenge may seem obvious (backup more frequently), it’s far from simple. We must remember that in the quest to architect a simple DR plan, many organizations make the one wrong move that becomes their downfall: they use too many solutions, often trying to overcompensate for capabilities offered in one but not the others.

The other, and arguably more alarming reason, is a general lack of understanding about what’s truly viable with any given vendor. While many solutions today can get your organization back online in minutes, the key is minimizing the amount of business activity lost during an unplanned outage. It’s this factor that can easily be overlooked, and one that most solutions cannot deliver.

WHEN A BLIP TURNS BRUTAL

Imagine, for a moment, you have a power failure that brings down your systems and one of two scenarios plays out. In the first, you’re confident you can recover quickly, spinning up your primary application in minutes only to realize the data you’re restoring is hours – or even days – old. Your manager is frantic and your sales team is furious as they stand by and watch every order from the past day go missing. In the second scenario, you’re confident you can recover quickly and spin up your primary application in minutes – this time, however, with data that was synced just a few seconds or minutes ago. This is the difference between a blip on the radar of your internal and external customers and potentially hundreds of thousands of dollars (or more) in lost revenue, not to mention damage to your and your organization’s reputation, which is right up there with financial loss.

For a variety of reasons ranging from perceived cost and complexity to limited network bandwidth and resistance to change, many shy away from deploying DR solutions that could very well enable them to avoid IT disasters. However, leveraging a solution that can keep your “blip” from turning brutal is easily the difference between a DR strategy that works and one that simply doesn’t.

ASK THESE 10 QUESTIONS TO MAKE SURE YOUR DR SOLUTION ISN’T TRICKING YOU

Many IT leaders agree that the volume of data lost during downtime (your recovery point objective, or RPO) is equally, if not more important than the time it takes to restore (your recovery time objective, or RTO). The trick is wading through the countless solutions that promise 100 percent uptime, but fall short in supporting stringent RPOs for critical systems and applications. These questions can help you evaluate whether your solution will make the cut or leave you in the cold:

  1. Does the solution include on-premises (for quick recovery of one or a few systems), remote (for critical systems at remote locations), private cloud you have already invested in, public cloud (Amazon/Azure) and purpose-built vendor cloud options? Your needs may vary and the solution should offer broad options to fit your infrastructure and business requirements.
  2. How many vendors would be involved in your end-to-end DR solution, including software, hardware, networking, cloud services, DR hypervisors and high availability? How many user interfaces would that entail? A patchwork solution from numerous vendors may increase complexity, management time and internal costs – and, more importantly, it increases the risk of bouncing between vendors if something goes wrong.
  3. Does the solution provide support and recovery for all generations of IT platforms, including non-x86, x86, physical, virtual and cloud instances running Windows and/or Linux?
  4. Does the solution offer both direct-to-cloud and hybrid cloud options? This ensures you can address any business requirement and truly safeguard your IT transformation.
  5. Does the solution deliver sub five-minute, rapid push-button failover? This allows you to continue accessing business-critical applications during a downtime event, as well as power on / run your environment with the click of a button.
  6. Does it support both rapid failover (RTOs) and RPOs of minutes, regardless of network complexity? When interruption happens, it’s vital that you can access business-critical applications with minimal disruption and effectively protect these systems by supporting RPOs of minutes.
  7. Does the solution provide automated incremental failback to bring back all applications and databases in their most current state to your on-premises environment?
  8. Does your solution leverage image-based technology to ensure no important data or configuration is left behind?
  9. Is your solution optimized for low bandwidth locations, being capable of moving large volumes of data to and from the cloud without draining bandwidth?
  10. In the event of a disaster, does the solution give you options for network connectivity, such as point to site VPN, site to site VPN and site to site VPN with IP takeover?

The true value you provide your organization and your customers is the peace of mind and viability of their business when a disaster or downtime event occurs. And even when it’s business as usual, you’ll be able to support a range of needs – such as migrating workloads to a public or private cloud, advanced hypervisor protection, and support of sub-minute RTOs and RPOs – across every IT platform, from UNIX and x86 to public and private clouds.

By keeping these questions in mind, you’ll be better prepared to challenge vendor promises that often cannot be delivered and to select the right solution to safeguard your entire IT infrastructure – when disaster strikes and when it doesn’t. No more day-old sushi. No more secrets.

About the Author

As VP and GM of Arcserve Cloud Solutions, Alex Becker leads the company’s cloud and North American sales teams. Before joining Arcserve in April 2018, Alex served in various sales and leadership positions at ClickSoftware, Digital River, Fujitsu Consulting, and PTC.

Ah, Florida. Home to sun-washed beaches, Kennedy Space Center, the woeful Marlins – and one of the most costly tort systems in the country.

A significant driver of these costs is Florida’s “assignment of benefits crisis.”

Today the I.I.I. published a report documenting what the crisis is, how it’s spreading and how it’s costing Florida consumers billions of dollars. You can download and read the full report, “Florida’s assignment of benefits crisis: runaway litigation is spreading, and consumers are paying the price,” here.

An assignment of benefits (AOB) is a contract that allows a third party – a contractor, a medical provider, an auto repair shop – to bill an insurance company directly for repairs or other services done for the policyholder.

...

http://www.iii.org/insuranceindustryblog/study-florida-assignment-benefits-crisis-is-spreading-and-is-costing-consumers-billions-dollars/

Supply chain cartoon

It’s in your company’s best interest not to overlook disaster recovery (DR). If you’re hit with a cyberattack, natural disaster, power outage or any other sort of unplanned disturbance that could threaten your business, you’ll be happy you had a DR plan in place.

It’s important to remember that your business is made up of a lot of moving parts, some of which may reside outside your building and under the control of others. And just because you have the foresight to prepare for the worst doesn’t mean the companies in your supply chain will also take the same precautions.

Verify that all participants within your supply chain have DR and business continuity plans in place, and that these plans are routinely tested and communicated to employees to ensure they can hold up their end of the supply chain in the event of a disaster. If you don’t, the wheels might just fall off your DR plan.

Check out more IT cartoons.

Free cloud storage is one of the best online storage deals – the price is right. 

Free cloud backup provides a convenient way to share content with friends, family and colleagues. Small businesses and individuals can take advantage of free online file storage to gain extra space, back up and recover data, or just store files temporarily.

Free cloud storage also tends to come with paid options priced for individuals, small businesses, and large enterprises – so the service can grow with you. Pricing for these options can vary considerably.

The following are the best free cloud backup, with the associated advanced cloud storage options:

(Hint: some businesses have discovered that the most free cloud storage results from combining free cloud services:)

...

http://www.enterprisestorageforum.com/cloud-storage/best-free-cloud-storage-providers.html


These are the five major developments Jerry Melnick, president and CEO of SIOS Technology, sees in cloud, high availability and IT service management, DevOps, and IT operations analytics and AI in 2019:

 

1. Advances in Technology Will Make the Cloud Substantially More Suitable for Critical Applications

Advances in technology will make the cloud substantially more suitable for critical applications. With IT staff now becoming more comfortable in the cloud, their concerns about security and reliability, especially for five-9’s of uptime, have diminished substantially. Initially, organizations will prefer to use whatever failover clustering technology they currently use in their datacenters to protect the critical applications being migrated to the cloud. This clustering technology will also be adapted and optimized for enhanced operations in the cloud. At the same time, cloud service providers will continue to advance their service levels, leading to the cloud ultimately becoming the preferred platform for all enterprise applications.

2. Dynamic Utilization Will Make HA and DR More Cost-effective for More Applications, Further Driving Migration to the Cloud

Dynamic utilization of the cloud’s vast resources will enable IT to more effectively manage and orchestrate the services needed to support mission-critical applications. With its virtually unlimited resources spread around the globe, the cloud is the ideal platform for delivering high uptime. But provisioning standby resources that sit idle most of the time has been cost-prohibitive for many applications. The increasing sophistication of fluid cloud resources deployed across multiple zones and regions, all connected via high-quality internetworking, now enables standby resources to be allocated dynamically only when needed, which will dramatically lower the cost of provisioning high availability and disaster recovery protections.

3. The Cloud Will Become a Preferred Platform for SAP Deployments

Given SAP’s mission-critical nature, IT departments have historically chosen to implement SAP and SAP S/4HANA in enterprise datacenters, where the staff enjoys full control over the environment. As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud without sacrificing performance or availability.

4. Cloud “Quick-start” Templates Will Become the Standard for Complex Software and Service Deployments

Quick-start templates will become the standard for complex software and service deployments in private, public and hybrid clouds. These templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure and orchestrate the resources and services needed to run specific applications. Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems. By making deployments more turnkey, quick-start templates will substantially decrease the time and effort it takes for DevOps staff to set up, test and roll out dependable configurations.
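
To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what a quick-start template automates: a handful of wizard inputs are expanded into an ordered provisioning plan. The parameter names and steps are illustrative assumptions, not any cloud provider’s actual template format or API.

from dataclasses import dataclass

@dataclass
class QuickStartParams:
    app_name: str
    region: str
    node_count: int = 2            # e.g., active + standby for basic failover
    enable_replication: bool = True

def build_plan(params):
    """Expand the wizard inputs into the ordered steps a deployment engine would run."""
    plan = ["create network '%s-net' in %s" % (params.app_name, params.region)]
    for i in range(params.node_count):
        plan.append("provision node %d for '%s'" % (i + 1, params.app_name))
    if params.enable_replication:
        plan.append("configure block-level replication between nodes")
    plan.append("install and configure '%s' on all nodes" % params.app_name)
    plan.append("run post-deployment health checks")
    return plan

for step in build_plan(QuickStartParams("erp-db", "us-east")):
    print(step)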

5. Advanced Analytics and Artificial Intelligence Will Be Everywhere and in Everything, Including Infrastructure Operations

Advanced analytics and artificial intelligence will continue becoming more highly focused and purpose-built for specific needs, and these capabilities will increasingly be embedded in management tools. This much-anticipated capability will simplify IT operations, improve infrastructure and application robustness, and lower overall costs. Along with this trend, AI and analytics will become embedded in high availability and disaster recovery solutions, as well as cloud service provider offerings to improve service levels. With the ability to quickly, automatically and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical services delivered from the cloud will vastly improve. 

A COMSAT Perspective

We’ve seen it happen all too often – large populations devastated by natural disasters such as earthquakes, tsunamis, fires and extreme weather. As we’ve witnessed in the past, devastation isn’t limited to natural occurrences; it can also be man-made. Whatever the event, natural or man-made, first responders and relief teams depend on reliable communication to provide those most affected the help they need. Dependable satellite communication (SATCOM) technology can be the difference between life and death, between expedient care and delay.

Devastation can occur in the business community, as well. Businesses and government entities that depend on the Internet of Things (IoT), as most do, can face tremendous loss without a communication, or continuity, plan.

How do we stay constantly connected by land, sea or air in vulnerable situations? Today’s teleport SATCOM technology provides operational resiliency that is reliable, scalable and cost-effective for anyone who depends on connectivity, including IoT.

Independent of the vulnerabilities of terrestrial land lines, today’s modern teleports provide a variety of voice and data options that include offsite data warehousing, machine-to-machine (M2M) access, and a secure, reliable connection to private networks and the World Wide Web.

Manufacturing, energy, transportation, retail, healthcare, financial services, smart cities, government and education are all closing the digital divide and becoming more and more dependent on connectivity to conduct business. They all require disaster recovery systems and reliable communications that only satellite communications can provide when land circuits are disrupted.

COMSAT, a Satcom Direct (SD) company, with the SD Data Center, has been working to provide secure, comprehensive, integrated connectivity solutions to help organizations stay connected, no matter the environment or circumstances. COMSAT’s teleports, a critical component in this process, have evolved to keep pace with changing communication needs in any situation.

“In the past, customers would come to COMSAT to connect equipment at multiple locations via satellite using our teleports. Today, the teleports do so much more. They act as a network node, data center, meet-me point and customer support center. They are no longer a place where satellite engineers focus on antennas, RF, baseband and facilities. Today’s teleports are now an extension of the customer’s business ensuring they are securely connected when needed,” said Chris Faletra, director of teleport sales.

COMSAT owns and operates two commercial teleport facilities in the United States. The Southbury teleport is located on the east coast, about 60 miles north of New York City. The Santa Paula teleport is located 90 miles north of Los Angeles on the west coast.

Each teleport has operated continuously for more than 40 years, since 1976. The teleports were built to high standards for providing life and safety services, along with a host of satellite system platforms from meteorological data gathering to advanced navigation systems. As such, they are secure facilities connected to multiple terrestrial fiber networks and act as backup for each other through both terrestrial and satellite transmission pathways.

Both facilities are data centers equipped with advanced satellite antennas and equipment backed up with automated and redundant electrical power sources, redundant HVAC systems, automatic fire detection and suppression systems, security systems and 24/7/365 network operations centers. The teleports are critical links in delivering the complete connectivity chain.

“Our teleport facilities allow us to deliver global satellite connectivity. The teleports provide the link between the satellite constellation and terrestrial networks for reliable end-to-end connectivity at the highest service levels,” said Kevin West, chief commercial officer.

COMSAT was originally created by the Communications Satellite Act of 1962 and incorporated as a publicly traded company in 1963, with the initial purpose of developing a commercial and international satellite communications system as a public, federally funded corporation.

For the past five decades, COMSAT has played an integral role in the growth and advancement of the industry, including being a founding member of Intelsat, operating the Marisat fleet network, and founding the initial operating system of Inmarsat from its two Earth stations.

While the teleports have been in operation for more than 40 years, the technology is continuously upgraded and enhanced to proactively support communication needs. For many years, the teleports provided point-to-point connectivity for voice and low-rate data.

Now data rates are being pushed to 0.5 Gbps with thousands of remotes on the network. The teleports also often serve as the Internet service provider (ISP). They have their own diverse fiber infrastructure to deliver gigabits of connectivity versus the megabits that were required not so long ago.

All in the Family

In addition to growing the teleport’s capabilities through technological advancements, COMSAT is now a part of the SD family of companies, which further expands its offerings.

SD Land and Mobile, a division of Satcom Direct, offers a wide variety of satellite phones, mobile satellite Internet units and fixed satellite Internet units. SD Land and Mobile ensures SATCOM connectivity is available no matter how remote the location or how limited the cellular and data network coverage may be.

Data security is a critically important subject today. The SD Data Center, wholly owned by Satcom Direct, brings enterprise-level security capabilities to data transmissions in the air, on the ground and over water. The SD Data Center also provides industry-compliant data center solutions and business continuity planning for numerous industries including healthcare, education, financial, military, government and technology.

“Together, we deliver the infrastructure, products and data security necessary to keep you connected under any circumstance. We have a complete suite of solutions and capabilities for our clients,” said Rob Hill, business development.

Keeping Up with Market Needs and Trends

COMSAT’s pioneering spirit is reflected in the company’s ongoing analysis of, and adjustment to, current market needs and trends. The aero market is currently the fastest-growing market, with new services and higher data rates being offered almost daily. Maritime, mobility and government markets are thriving as well.

No matter what direction the market is headed, COMSAT’s teleports and the SD family of companies will be ready to help clients weather the storm. comsat.com

To learn about SD Land & Mobile, head over to satcomstore.com

For additional information regarding the SD Data Center, access sddatacenter.com

COMSAT’s provision of teleport services is managed by Guy White, director of US teleports. As station director for COMSAT’s Southbury, Connecticut, and Santa Paula, California, teleports, Mr. White is responsible for the day-to-day operations and engineering of both facilities, including program planning, budget control, task scheduling, priority management, personnel matters, maintenance contract control, and other tasks related to teleport operations.

Mr. White began his career in the SATCOM industry in 1980 as a technician at the Southbury facility. Since then, he successively held the positions of senior technician, lead technician, maintenance technician and customer service engineer at Southbury, until he assumed the position of operations manager in 1992 at COMSAT’s global headquarters in Washington D.C. He returned to Southbury as station engineer in 1995 and has served as station director of the Southbury teleport since May of 2000. Mr. White’s responsibilities expanded to include the Santa Paula teleport in May of 2008.

Increase your business continuity (BC) knowledge and expertise by checking out this list of an even dozen top BC resources.

Business continuity is a sprawling, fast-changing, and challenging field. Fortunately, there are a lot of great resources out there that can help you in your drive to improve your knowledge and protect your organization.

In today’s post, I round up a “dynamic dozen” resources that you should be aware of in your role as a business continuity professional.

Some of these might be old friends and others might be new to you. In any case, you might find it beneficial to review the websites and other resources on this list as you update your strategies, perform risk assessments, and identify where to focus your future efforts.

Read on to become a master of disaster. And remember that the most important resource in any BC program is capable, knowledgeable, and well-educated people.

...

https://bcmmetrics.com/key-bc-resources/

By Cassius Rhue, Director of Engineering at SIOS Technology

All public cloud service providers offer some form of guarantee regarding availability, and these may or may not be sufficient, depending on each application’s requirement for uptime. These guarantees typically range from 95.00% to 99.99% of uptime during the month, and most impose some type of “penalty” on the service provider for falling short of those thresholds.

Most cloud service providers offer a 99.00% uptime threshold, which equates to about seven hours of downtime per month. And for many applications, those two-9’s might be enough. But for mission-critical applications, more 9’s are needed, especially given the fact that many common causes of downtime are excluded from the guarantee.
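
To put those percentages in perspective, here is a quick back-of-the-envelope calculation (a minimal Python sketch assuming a 30-day month; real SLAs measure and exclude downtime in provider-specific ways):

HOURS_PER_MONTH = 30 * 24  # 720 hours in an assumed 30-day month

def allowed_downtime_hours(uptime_pct):
    """Hours per month that can be lost while still meeting the stated uptime."""
    return HOURS_PER_MONTH * (1 - uptime_pct / 100)

for sla in (95.00, 99.00, 99.90, 99.99):
    print("%.2f%% uptime -> %.2f hours/month" % (sla, allowed_downtime_hours(sla)))

# Prints:
#   95.00% uptime -> 36.00 hours/month
#   99.00% uptime -> 7.20 hours/month   (the "about seven hours" cited above)
#   99.90% uptime -> 0.72 hours/month
#   99.99% uptime -> 0.07 hours/month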

There are, of course, cost-effective ways to achieve five-9’s high availability and robust disaster recovery protection in configurations using public cloud services, either exclusively or as part of a hybrid arrangement. This article highlights limitations involving HA and DR provisions in the public cloud, explores three options for overcoming these limitations, and describes two common configurations for failover clusters.

Caveat Emptor in the Cloud

While all cloud service providers (CSPs) define “downtime” or “unavailable” somewhat differently, these definitions include only a limited set of all possible causes of failures at the application level. Generally included are failures affecting a zone or region, or external connectivity. All CSPs also offer credits ranging from 10% for failing to meet four-9’s of uptime to around 25% for failing to meet two-9’s of uptime.

Redundant resources can be configured to span the zones and/or regions within the CSP’s infrastructure, and that will help to improve application-level availability. But even with such redundancy, there remain some limitations that are often unacceptable for mission-critical applications, especially those requiring high transactional throughput performance. These limitations include each master being able to create only a single failover replica, requiring the use of the master dataset for backups, and using event logs to replicate data. These and other limitations can increase recovery time during a failure and make it necessary to schedule at least some planned downtime.

The more significant limitations involve the many exclusions to what constitutes downtime. Here are just a few examples, drawn from actual CSP service level agreements, of application-level failures that are excluded from “downtime” or “unavailability” because they result from:

  • factors beyond the CSP’s reasonable control (in other words, some of the stuff that happens regularly, such as carrier network outages and natural disasters)
  • the customer’s software, or third-party software or technology, including application software
  • faulty input or instructions, or any lack of action when required (in other words, the inevitable mistakes caused by human fallibility)
  • problems with individual instances or volumes not attributable to specific circumstances of “unavailability”
  • any hardware or software maintenance as provided for pursuant to the agreement

 

To be sure, it is reasonable for CSPs to exclude certain causes of failure. But it would be irresponsible for system administrators to treat these exclusions as excuses; they make it necessary to ensure application-level availability by some other means.

Three Options for Improving Application-level Availability

Provisioning resources for high availability in a way that does not sacrifice security or performance has never been a trivial endeavor. The challenge is especially difficult in a hybrid cloud environment where the private and public cloud infrastructures can differ significantly, which makes configurations difficult to test and maintain, and can result in failover provisions failing when actually needed.

For applications where the service levels offered by the CSP fall short, there are three additional options available: capabilities built into the application itself, features in the operating system, or purpose-built failover clustering software.

The HA/DR options that might appear to be the easiest to implement are those specifically designed for each application. A good example is Microsoft’s SQL Server database with its carrier-class Always On Availability Groups feature. There are two disadvantages to this approach, however. The higher licensing fees, in this case for the Enterprise Edition, can make it prohibitively expensive for many needs. The more troubling disadvantage is the need for different HA/DR provisions for different applications, which makes ongoing management a constant (and costly) struggle.

The second option involves using uptime-related features integrated into the operating system. Windows Server Failover Clustering, for example, is a powerful and proven feature that is built into the OS. But on its own, WSFC might not provide a complete HA/DR solution because it lacks a data replication feature. In a private cloud, data replication can be provided using some form of shared storage, such as a storage area network. But because shared storage is not available in public clouds, implementing robust data replication requires using separate commercial or custom-developed software.

For Linux, which lacks a feature like WSFC, the need for additional HA/DR provisions and/or custom development is considerably greater. Using open source software like Pacemaker and Corosync requires creating (and testing) custom scripts for each application, and these scripts often need to be updated and retested after even minor changes are made to any of the software or hardware being used. But because getting the full HA stack to work well for every application can be extraordinarily difficult, only very large organizations have the wherewithal needed to even consider taking on the effort.

Ideally there would be a “universal” approach to HA/DR capable of working cost-effectively for all applications running on either Windows or Linux across public, private and hybrid clouds. Among the most versatile and affordable of such solutions is the third option: the purpose-built failover cluster. These HA/DR solutions are implemented entirely in software that is designed specifically to create, as their designation implies, a cluster of virtual or physical servers and data storage with failover from the active or primary instance to a standby to assure high availability at the application level.

These solutions provide, at a minimum, a combination of real-time data replication, continuous application monitoring and configurable failover/failback recovery policies. Some of the more robust ones offer additional advanced capabilities, such as a choice of block-level synchronous or asynchronous replication, support for Failover Cluster Instances (FCIs) in the less expensive Standard Edition of SQL Server, WAN optimization for enhanced performance and minimal bandwidth utilization, and manual switchover of primary and secondary server assignments to facilitate planned maintenance.
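
To make that concrete, the following is a minimal sketch (in Python, with illustrative names that are not any vendor’s API) of the kind of monitoring loop and configurable failover policy such software automates; a real product also handles replication, fencing and quorum.

import time
from dataclasses import dataclass

@dataclass
class FailoverPolicy:
    check_interval_s: int = 5          # how often to probe the application
    failures_before_failover: int = 3  # consecutive failed probes that trigger failover

def app_is_healthy(node):
    """Placeholder health probe (e.g., a TCP connect or SQL ping against `node`)."""
    return True  # always healthy in this sketch

def run_cluster(primary, standby, policy):
    """Continuously monitor the primary and fail over to the standby per policy."""
    failures = 0
    while True:
        if app_is_healthy(primary):
            failures = 0
        else:
            failures += 1
            if failures >= policy.failures_before_failover:
                print("failing over from %s to %s" % (primary, standby))
                primary, standby = standby, primary
                failures = 0
        time.sleep(policy.check_interval_s)

# run_cluster("server1", "server2", FailoverPolicy())  # runs until interrupted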

Although these solutions are generally storage-agnostic, enabling them to work with storage area networks, shared-nothing SANless failover clusters are normally preferred because they eliminate potential single points of failure.

Two Common Failover Clustering Configurations

Every failover cluster consists of two or more nodes, and locating at least one of the nodes in a different datacenter is necessary to protect against local disasters. Presented here are two popular configurations: one for disaster recovery purposes; the other for providing both mission-critical high availability and disaster recovery. Because high transactional performance is often a requirement for highly available configurations, the example application is a database.

The basic SANless failover cluster for disaster recovery has two nodes with one primary and one secondary or standby server or server instance. This minimal configuration also requires a third node or instance to function as a witness, which is needed to achieve a quorum for determining assignment of the primary. For database applications, replication to the standby instance across the WAN is asynchronous to maintain high performance in the primary instance.
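
The witness matters because quorum is a majority vote: a node (or the partition of nodes it can still reach) may hold the primary role only if it sees a strict majority of the cluster’s votes, which prevents a “split-brain” when the two data nodes lose contact. A minimal Python sketch of that majority test, assuming three votes (primary, standby and witness):

def has_quorum(votes_visible, total_votes=3):
    """True if this partition sees a strict majority of the cluster's votes."""
    return votes_visible > total_votes // 2

# Two-node cluster plus witness: each partition counts the voters it can reach.
print(has_quorum(2))  # data node + witness reachable -> True: may hold the primary role
print(has_quorum(1))  # isolated node sees only itself -> False: must not act as primary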

The SANless failover cluster affords a rapid recovery in the event of a failure in the primary, making this basic DR configuration suitable for many applications. And because it is capable of detecting virtually all possible failures, including those not counted as downtime in public cloud services, it will work in a private, public or hybrid cloud environment.

For example, the primary could be in the enterprise datacenter with the secondary deployed in the public cloud. Because the public cloud instance would be needed only during planned maintenance of the primary or in the event of its failure—conditions that can be fairly quickly remedied—the service limitations and exclusions cited above may well be acceptable for all but the most mission-critical of applications.

[Figure: This three-node SANless failover cluster has one active and two standby server instances, making it capable of handling two concurrent failures with minimal downtime and no data loss.]

The figure shows an enhanced three-node SANless failover cluster that affords both five-9’s high availability and robust disaster recovery protection. As with the two-node cluster, this configuration will also work in a private, public or hybrid cloud environment. In this example, servers #1 and #2 are located in an enterprise datacenter with server #3 in the public cloud. Within the datacenter, replication across the LAN can be fully synchronous to minimize the time it takes to complete a failover and, therefore, maximize availability.

When properly configured, three-node SANless failover clusters afford truly carrier-class HA and DR. The basic operation is application-agnostic and works the same for Windows or Linux. Server #1 is initially the primary or active instance that replicates data continuously to both servers #2 and #3. If it experiences a failure, the application would automatically failover to server #2, which would then become the primary replicating data to server #3.

Immediately after a failure in server #1, the IT staff would begin diagnosing and repairing whatever caused the problem. Once fixed, server #1 could be restored as the primary with a manual failback, or server #2 could continue functioning as the primary replicating data to servers #1 and #3. Should server #2 fail before server #1 is returned to operation, as shown, server #3 would become the primary. Because server #3 is across the WAN in the public cloud, data replication is asynchronous and the failover is manual to prevent “replication lag” from causing the loss of any data.
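
The failover order just described can be summarized in a short sketch, under the stated assumptions: servers #1 and #2 share a LAN and replicate synchronously (automatic failover), while server #3 sits across the WAN and replicates asynchronously (manual failover only, to avoid losing data to replication lag). The node list and helper names below are illustrative, not product code.

NODES = [  # in failover priority order
    {"name": "server1", "site": "datacenter",   "replication": "sync"},
    {"name": "server2", "site": "datacenter",   "replication": "sync"},
    {"name": "server3", "site": "public cloud", "replication": "async"},
]

def next_primary(failed):
    """Return the first surviving node in priority order, or None if all are down."""
    for node in NODES:
        if node["name"] not in failed:
            return node
    return None

def failover(failed):
    node = next_primary(failed)
    if node is None:
        return "no surviving node"
    if node["replication"] == "sync":
        mode = "automatic failover"
    else:
        mode = "manual failover (operator confirms to avoid replication-lag data loss)"
    return "%s becomes primary via %s" % (node["name"], mode)

print(failover({"server1"}))             # server2 becomes primary via automatic failover
print(failover({"server1", "server2"}))  # server3 becomes primary via manual failover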

With SANless failover clustering software able to detect all possible failures at the application level, it readily overcomes the CSP limitations and exclusions mentioned above, and makes it possible for this three-node configuration to be deployed entirely within the public cloud. To afford the same five-9’s high availability based on immediate and automatic failovers, servers #1 and #2 would need to be located within a single zone or region where the LAN facilitates synchronous replication.

For appropriate DR protection, server #3 should be located in a different datacenter or region, where the use of asynchronous replication and manual failover/failback would be needed for applications requiring high transactional throughput. Three-node clusters can also facilitate planned hardware and software maintenance for all three servers while providing continuous DR protection for the application and its data.

By offering multiple, geographically-dispersed datacenters, public clouds afford numerous opportunities to improve availability and enhance DR provisions. And because SANless failover clustering software makes effective and efficient use of all compute, storage and network resources, while also being easy to implement and operate, these purpose-built solutions minimize all capital and operational expenditures, resulting in high availability being more robust and more affordable than ever before.

# # #

About the Author

Cassius Rhue is Director of Engineering at SIOS Technology, where he leads the software product development and engineering team in Lexington, SC. Cassius has over 17 years of software engineering, development and testing experience, and a BS in Computer Engineering from the University of South Carolina. 

Speed up recovery process, improve quality and add to contractor credibility

 

By John Anderson, FLIR

Thermal imaging tools integrated with moisture meters can speed up the post-hurricane recovery process, improve repair quality, and add to contractor credibility. A thermal imaging camera can help you identify moisture areas faster and can lead to more accurate inspections with fewer call backs for verification by insurance companies. Many times, a good thermal image sent via email may be sufficient documentation to authorize additional work, leading to improved efficiency in the repair process.

Post-event process

Contractors need to be able to evaluate water damage quickly and accurately after a hurricane or other storm event. This can be a challenge using traditional tools, especially pinless (non-invasive) moisture meters that offer a nondestructive measurement of moisture in wood, concrete and gypsum. Operating on the principle of electrical impedance, pinless moisture meters read wood using a scale of 5 to 30 percent moisture content (MC); they read non-wood materials on a relative scale of 0 to 100 percent MC. [1] While simple to use, identifying damage with any traditional moisture meter alone is a tedious process, often requiring at least 30 to 40 readings. And the accuracy of the readings is only as good as the user’s ability to find and measure all the damaged locations.

Using a thermal imaging camera along with a moisture meter is much more accurate. These cameras work by detecting the infrared radiation emitted by objects in the scene. The sensor takes that energy and translates it into a visible image. The viewer sees temperatures in the image as a range of colors: red, orange and yellow indicate heat, while dark blue, black or purple signify colder temperatures associated with evaporation or water leaks and damage. Using this type of equipment speeds up the process and tracks the source of the leak, providing contractors with a visual to guide them and confirm where the damage is located. Even a basic thermal imaging camera, one that is used in conjunction with a smartphone, is far quicker and more accurate at locating moisture damage than a typical noninvasive spot meter.
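
As a simplified illustration of that false-color mapping, the short Python sketch below bins each sensed temperature into a coarse palette so that cooler, possibly wet areas stand out from warmer, dry ones; the temperature thresholds are made up for illustration and are not FLIR calibration values.

PALETTE = [  # (upper bound in deg C, display color) -- illustrative thresholds only
    (16.0, "dark blue/purple (coolest - possible evaporation or moisture)"),
    (20.0, "blue"),
    (24.0, "yellow"),
    (float("inf"), "red/orange (warmest, dry)"),
]

def colorize(temp_c):
    """Map one pixel's temperature to its palette color."""
    for upper_bound, color in PALETTE:
        if temp_c <= upper_bound:
            return color

scan_line = [22.5, 21.8, 15.2, 14.9, 22.1]  # one row of sensed temperatures
for t in scan_line:
    print("%.1f C -> %s" % (t, colorize(t)))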

Infrared Guided Measurement (IGM)

An infrared (IR) thermal imaging camera paired with a moisture meter is a great combination. The user can find the cold spots with the thermal camera and then confirm moisture is present with the moisture meter. This combination is widely used today, prompting FLIR to develop the MR176 infrared guided measurement (IGM™) moisture meter. This all-in-one moisture meter and thermal imager allows contractors to use thermal imaging and take moisture meter readings for a variety of post-storm cleanup tasks. These include inspecting the property, preparing for remediation, and, during remediation, assessing the effectiveness of dehumidifying equipment. The tool can also be used down the road, after remediation, to identify leaks that may or may not be related to the hurricane.

During the initial property inspection, the thermal imaging camera visually identifies cold spots, which are usually associated with moisture evaporation. Without infrared imaging, the user is left to blindly test for moisture—and may miss areas of concern altogether.

While preparing for remediation, a tool that combines a thermal imaging camera with a relative humidity and temperature (RH&T) sensor can provide contractors with an easy way to calculate the equipment they will need for the project. This type of tool measures the weight of the water vapor in the air in grains per pound (GPP), relative humidity, and dew point values. Restoration contractors know how many gallons of water per day each piece of equipment can remove and, using the data provided by the meter, can determine the number of dehumidifiers needed in a given space to dry out the area.
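
That sizing step is essentially a division with a round-up. Here is a back-of-the-envelope Python sketch, with made-up numbers rather than any manufacturer’s ratings:

import math

def dehumidifiers_needed(estimated_gallons_per_day, unit_capacity_gpd):
    """Round up so the space is fully covered rather than undersized."""
    return math.ceil(estimated_gallons_per_day / unit_capacity_gpd)

# Example: a space shedding roughly 70 gallons/day, units rated at 16 gallons/day each.
print(dehumidifiers_needed(70, 16))  # -> 5 units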

The dehumidifiers reduce moisture and restore proper humidity levels, preventing the build-up of air toxins and neutralizing odors from hurricane water damage. Since the equipment is billed back to the customer or insurance company on a per-hour basis, contractors must balance the costs with the need for full area coverage.

During remediation, moisture meters with built-in thermal imaging cameras provide key data that contractors can use to spot check the drying process and equipment effectiveness over time. In addition, thermal imaging can be used to identify areas that may not be drying as efficiently as others and can guide the placement of drying equipment.

The equipment is also useful after the fact, if, for example, contractors are looking to identify the source of small leaks that may or may not be related to the damage from the hurricane. Using a moisture meter/thermal camera combination can help them track the location and source of the moisture, as well as determine how much is remaining.

Remodeling contractors who need to collect general moisture data can benefit from thermal imaging moisture meters, as well. For example, tracing a leak back to its source can be a challenge. A leak in an attic may originate in one area of the roof and then run down into different parts of the structure. A moisture meter equipped with a thermal imager can help them determine where the leak actually started by tracing a water trail up the roof rafter to the entrance spot.

Choosing the right technology

A variety of thermal imaging tools are available, depending upon whether the contractor is looking for general moisture information, or needs more precise information on temperature and relative humidity levels.

For example, the FLIR MR176 IGM™ moisture meter with replaceable hygrometer is an all-in-one tool equipped with a built-in thermal camera that can visually guide contractors to the precise spot where they need to measure moisture. An integrated laser and crosshair helps pinpoint the surface location of the issue found with the thermal camera. The meter comes with an integrated pinless sensor and an external pin probe, which gives contractors the flexibility to take either non-intrusive or intrusive measurements.

Coupled with a field-replaceable temperature and relative humidity sensor and automatically calculated environmental readings, the MR176 can quickly and easily produce the right measurements during the hurricane restoration and remediation process. Users can customize thermal images by selecting which measurements to integrate, including moisture, temperature, relative humidity, dew point, vapor pressure and mixing ratio. They can also choose from several color palettes, and use a lock-image setting to prevent extreme hot and cold temperatures from skewing images during scanning.

Also available is the FLIR MR160, a good tool for remodeling contractors looking for general moisture information – for example, pinpointing drywall damage from a washing machine, finding the source of a roof leak that is showing up in flooring or drywall, or locating ice dams. It has many of the features of the MR176 but does not include the integrated RH&T sensor.

Capturing images with a thermal camera builds contractor trust and credibility

Capturing images of hurricane-related damage with a thermal camera provides the type of documentation that builds contractor credibility and increases trust with customers. These images help customers understand and accept contractor recommendations. Credibility increases when customers are shown images demonstrating conclusively why an entire wall must be removed and replaced.

When customers experience a water event, proper photo documentation can bolster their insurance claims. The inclusion of thermal images can improve insurance payout outcomes and speed up the claims process.

Post-storm cleanup tool for the crew

By providing basic infrared imaging functions, in combination with multiple moisture sensing technologies and the calculations made possible by the RH&T sensor, an imaging moisture meter such as the MR176 is a tool the entire remediation crew can carry during post-storm cleanup.

References

[1] Types of Moisture Meters, https://www.grainger.com/content/qt-types-of-moisture-meters-346, retrieved 5/29/18

Expert service providers update aging technology with minimal disruption

 

By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

Aging power control and automation systems carry risk of downtime for mission-critical power systems, both through the reduced availability of replacement components and through the loss of the knowledge needed to replace the devices within them. Of course, as components age, their risk of failure increases. Additionally, as technology advances, these same components are discontinued and become unavailable, and over time, service personnel lose the know-how to support the older generation of products. At the same time, though, complete replacement of these aging systems can be extremely expensive, and may also require far more downtime or additional space than these facilities can sustain.

The solution, of course, is the careful maintenance and timely replacement of power control and automation system components. By replacing only some components of the system at any given time, customers can benefit from the new capabilities and increased reliability of current technology, all while uptime is maintained. In particular, expert service providers can provide in-house wiring, testing, and vetting of system upgrades before components even ship to customers, ensuring minimal downtime. These services are particularly useful in healthcare facilities and datacenter applications, where power control is mission-critical and downtime is costly.

Automatic Transfer Switch (ATS) controllers and switchgear systems require some different types of maintenance and upgrades due to the differences in their components; however, the cost savings and improved uptime that maintenance and upgrades can provide are available to customers with either of these types of systems. The following maintenance programs and system upgrades can extend the lifetime of a power control system, minimize downtime in mission-critical power systems, and save costs.

Audits and Preventative Maintenance

Before creating a maintenance schedule or beginning upgrades, getting an expert technician into a facility to audit the existing system provides long-term benefits and the ability to prioritize. With a full equipment audit, a technician or application engineer who specializes in upgrading existing systems can evaluate an existing system and provide customers with a detailed migration plan for upgrading it, in order of priority, as well as a plan for preventative maintenance.

Whenever possible, scheduled preventative maintenance should be performed by factory-trained service employees of the power control system OEM, rather than by a third party. In addition to having the most detailed knowledge of the equipment, factory-trained service employees can typically provide the widest range of maintenance services. While third-party testing companies may only maintain power breakers and protective relay devices, OEM service providers will also maintain the controls within the system.

Through these system audits and regular maintenance plans, technicians can ensure that all equipment is and remains operational, and they can identify components that are likely to become problematic before they actually fail and cause downtime in a mission-critical system.

Upgrades for ATS Control Systems with Minimal System Disruption

In ATS controller systems, control upgrades can provide customers with greater power monitoring and metering. In addition, replacing the controls for aging ATS systems ensures that all components of the system controls are still in production, and therefore will be available for replacement at a reasonable cost and turnaround time. In comparison, trying to locate out-of-production components for an old control package can lead to high costs and a long turnaround time for repairs.

The most advanced service providers minimize downtime during ATS control upgrades by pre-wiring the controls and fully testing them within their own production facilities. When Russelectric performs ATS control upgrades, a pre-wired, fully-tested control package is shipped to the customer in one piece. The ATS is shut down only for as long as it takes to install the new controls retrofit, minimizing disruption.

In addition, new technology also improves system usability, similar to making the switch from a flip phone to a smartphone. New ATS controls from Russelectric, for example, feature a sizeable color screen with historical data and alarm reporting. All of the alerts, details and information on the switch are easily accessible, providing the operator with greater information when it matters most. This upgrade also paves the way for optional remote monitoring through a SCADA or HMI system, further improving usability and ease of system monitoring.

Switchgear System Upgrades

For switchgear systems, four main upgrades are possible in order to improve system operations and reliability without requiring a full system replacement: operator interface upgrades, PLC upgrades, breaker upgrades, and controls retrofits. Though each may be necessary at different times for different power control systems, all four upgrades are cost-effective, extend system lifespans, and minimize downtime.

Operator Interface Upgrades for Switchgear Systems

Similar to the ATS control upgrade, an operator interface (OI) or HMI upgrade for a switchgear power control system can greatly improve system usability, making monitoring easier and more effective for operators. This upgrade enables operators to see the system power flow, as well as to view alarms and system events in real time.

Also similar to ATS control upgrades, upgrading the OI also ensures that components will be in production and easily available for repairs. The greatest benefit, though, is providing operators real-time vision into system alerts without requiring them to walk through the system itself and search for indicator lights and alarms. Though upgrading this interface does not impact the actual system control, it provides numerous day-to-day benefits, enabling faster and easier troubleshooting and more timely maintenance.

Upgrades to PLC and Communication Hardware without Disrupting Operations

Many existing systems utilize PLC architectures that are legacy or approaching end of life. PLC upgrades allow a switchgear control system to be upgraded to the newest technology with minimal program changes. Relying on expert OEM service providers for this process can also simplify the process of upgrading PLC and communications hardware, protecting customers’ investments in power control systems while delivering noticeable system benefits.

A PLC upgrade by Russelectric includes all new PLC and communication hardware for the controls of the existing system, but maintains the existing logic and converts it for the latest technology. Upgrading the technology does not require new logic or operational sequences. As a result, the operation of the system remains unchanged and the existing wiring is maintained, which greatly reduces the likelihood that the system will need to be fully recommissioned and minimizes the downtime needed for testing. Russelectric’s process of converting the existing logic and, as previously mentioned, testing components in its own production facility before shipping them for installation keeps a system operational through the entire upgrade process. In addition, Russelectric has developed an installation sequence that systematically replaces one PLC at a time and converts the communications from PLC to PLC as components are replaced, keeping systems operational throughout the process and minimizing the risk of mission-critical power system downtime.

Breaker & Protective Relay Upgrades for Added Reliability and Protection

Breaker upgrades are often necessary to ensure system protection and reliability after many years of normal use. Two different types of breaker modifications or upgrades are available for switchgear power control systems: breaker retrofill and breaker retrofit. A retrofill upgrade places an entirely new device where the existing breaker system was. Retrofill upgrades maintain existing protections, lengthen service life, and provide the added benefits of power metering and other add-on protections, such as arc flash protection and maintenance of UL approvals.

Breaker retrofits can provide these same benefits, but they do so through a process of reengineering an existing breaker configuration. This upgrade requires a somewhat more labor-intensive installation, but provides generally the same end result. Whether a system requires a retrofit or retrofill upgrade is largely determined by the existing power breakers in a system.

For medium voltage systems, protective relay upgrades from single-function solid-state or mechanical protective devices to multifunction protective devices improve a system’s protection and reliability. Upgrading to multifunction protective relays provides enhanced protection, lengthens the service life of a system, and adds the benefits of power metering, communications and other add-on protections, like arc flash protection.

Russelectric prewires and tests new doors with the new protective devices so they arrive ready for installation, allowing for minimal disruption to a system and easy replacement.

Controls Retrofits Revive Aging Systems

For older switchgear systems that predate PLC controls, one of the most effective upgrades for extending system life and serviceability is a controls retrofit. This process includes a fully new control interior, interior control panels, and doors. It enables customers to replace end-of-life components, update to the latest control equipment and sequence standards, and gain the visibility benefits described above for OI upgrades.

The major consideration and requirement is to maintain the switchgear control wiring interconnect location, eliminating the need for new control wiring between the other switchgear, ATSs, and generators. Retrofitting the controls rather than replacing them allows the existing wiring to be maintained and provides a major cost savings on the system upgrade.

Just as with ATS controls retrofits, Russelectric builds the control panels and doors within its own facilities and simulates the non-controls components of the customer’s system that are not being replaced. In doing so, technicians can fully test the retrofit before replacing the existing controls. What’s more, Russelectric can provide customers with temporary generators and temporary control panels so that the existing system can be strategically upgraded, one cubicle at a time, while maintaining a fully operational system.

Benefits of an Expert Service Provider

As described throughout this article, relying on expert OEM service providers like Russelectric amplifies the benefits of power control system upgrades. With the right service provided at the right time by industry experts, mission-critical power control systems, like those in healthcare facilities and datacenters, can be upgraded with a minimum of downtime and cost. OEMs are often the greatest experts on their own products, with access to all of the drawings and documentation for each product, and are therefore best positioned to perform maintenance and upgrades effectively and efficiently.

Some of the most important cost-saving measures for power control system upgrades can only be achieved by OEM service providers. For example, maintaining the existing interconnect control wiring between power equipment and external equipment provides key cost savings, as it eliminates the need for electrical contractors to install a new system. Given that steel and copper substructure hardware can greatly outlast control components, retrofitting these existing components can also provide major cost savings. Finally, having access to temporary controls or power sources, pre-tested components, and the manufacturer’s component knowledge all helps to practically eliminate downtime, saving costs and removing barriers to upgrades. By upgrading a power control system with an OEM service provider, power system customers with mission-critical power systems gain the latest technology without the worry of downtime and the huge costs associated with full system replacement.

This document gives guidelines for monitoring hazards within a facility as a part of an overall emergency management and continuity programme by establishing the process for hazard monitoring at facilities with identified hazards.

It includes recommendations on how to develop and operate systems for the purpose of monitoring facilities with identified hazards. It covers the entire process of monitoring facilities.

This document is generic and applicable to any organization. The application depends on the operating environment, the complexity of the organization and the type of identified hazards.

...

https://www.iso.org/standard/67159.html

By GREG SPARROW

In the wake of the recent Facebook and Cambridge Analytica scandal, data and personal privacy matters have come to the forefront of consumers’ minds. When an organization like Facebook falls into trouble, big data is often blamed, but is big data actually at fault? When tech companies utilize and contract with third-party data mining companies, aren’t these data collection firms doing exactly what they were designed to do?

IBM markets its Watson as a way to get closer to knowing about consumers; however, when it does just that, it is perceived as an infringement on privacy. In the face of data privacy and security violations, companies have become notorious for pointing the finger elsewhere. Like any other scapegoat, big data has become an easy way out; a chance for the company to appear to side with, and support, the consumer. Yet many are long overdue in making changes that actually do protect and support the customer, and now find themselves needing to earn back lost consumer trust. Companies looking to please their customers publicly agree that big data is the issue, but behind the scenes may be doing little or nothing to change how they interact with these organizations. By pushing the blame to these data companies, they redirect the problem, holding their company and consumers as victims of something beyond their control.

For years, data mining has been used to help companies better understand their customers and market environment. Data mining is a means to offer insights from business to buyer or potential buyer. Before companies and resources like Facebook, Google, and IBM’s Watson existed, customers knew very little about their personal data. More recently, the general public has begun to understand what data mining actually is, how it is used, and to recognize the data trail they leave through their online activities.

Hundreds of articles have been written about data privacy, additional regulations to protect individuals’ data rights have been proposed, and some have even been signed into law. With the passing of new legislation pertaining to data, customers are going as far as filing lawsuits against companies that may have been storing personally identifiable information without their knowledge or proper consent.

State regulations have increasingly propelled interest in data privacy, and some believe they may develop into a national privacy law. Because of this, organizations are starting to take notice and have begun implementing policy changes to protect themselves from scrutiny. Businesses are taking a closer look at changing trends within the marketplace, as well as the public’s growing awareness of how their data is being used. Brands that face consumers directly need to be especially mindful of having appropriate security frameworks in place. Perhaps the issue among consumers is not the data collected, but how it is presented back to them or shared with others.

Generally speaking, consumers like content and products that are tailored to them. Many customers don’t mind data collection, marketing retargeting, or even promotional advertisements if they know they are benefiting from them. As consumers and online users, we often willingly give up our information in exchange for free access and convenience, but have we thoroughly considered how that information is being used, brokered and shared? If we did, would we pay more attention to who and what we share online?

Many customers have expressed their unease when their data is incorrectly interpreted and relayed. Understandably, they are irritated by irrelevant communications and become fearful when they lack trust in the organization behind the message. Is their sensitive information now in a databank with heightened risk of breach? When a breach or alarming infraction occurs, customers, including prospective ones, grow more concerned.

The general public has become acquainted with the positive aspects of big data, to the point where they expect retargeted ads and customized communications. On the other hand, even after agreeing to the terms and conditions, consumers are quick to blame big data when something negative occurs, rather than the core brand they chose to trust with their information.

About Greg Sparrow:

Greg Sparrow, Senior Vice President and General Manager at CompliancePoint, has over 15 years of experience with Information Security, Cyber Security, and Risk Management. His knowledge spans multiple industries and entities including healthcare, government, card issuers, banks, ATMs, acquirers, merchants, hardware vendors, encryption technologies, and key management.

 

About CompliancePoint:

CompliancePoint is a leading provider of information security and risk management services focused on privacy, data security, compliance and vendor risk management. The company’s mission is to help clients interact responsibly with their customers and the marketplace. CompliancePoint provides a full suite of services across the entire life cycle of risk management using a FIND, FIX & MANAGE approach. CompliancePoint can help organizations prepare for critical needs such as GDPR with project initiation and buy-in, strategic consulting, data inventory and mapping, readiness assessments, PIMS & ISMS framework design and implementation, and ongoing program management and monitoring. The company’s history of dealing with both privacy and data security, inside knowledge of regulatory actions, and combination of services and technology solutions make CompliancePoint uniquely qualified to help its clients achieve both a secure and compliant framework.

https://blog.sungardas.com/2018/10/machine-learning-cartoon-its-time-to-study-up-for-the-next-wave-of-innovation/

IT cartoon, machine learning

Successful companies understand they have to innovate to remain relevant in their industry. Few innovations are more buzzworthy than machine learning (ML).

The Accenture Institute for High Performance found that at least 40 percent of the companies surveyed were already employing ML to increase sales and marketing performance. Organizations are using ML to raise ecommerce conversion rates, improve patient diagnoses, boost data security, execute financial trades, detect fraud, increase manufacturing efficiency and more.

When asked which IT technology trends will define 2018, Alex Ough, CTO Architect at Sungard AS, noted that ML “will continue to be an area of focus for enterprises, and will start to dramatically change business processes in almost all industries.”

Of course, it’s important to remember that implementing ML in your business isn’t as simple as sticking an educator in front of a classroom of computers – particularly when companies are discovering they lack the skills to actually build machine learning systems that work at scale.

Machine learning, like many aspects of digital transformation, requires a shift in people, processes and technology to succeed. While that kind of change can be tough to stomach at some organizations, the alternative is getting left behind.

Check out more IT cartoons.

 

IT security cartoon

What is the price of network security? If your company understands we live in an interconnected world where cyber threats are continuously growing and developing, no cost is too great to ensure the protection of your crown jewels.

However, no matter how many resources you put into safeguarding your most prized “passwords,” the biggest threat to your company’s security is often the toughest to control – the human element.

It’s not that your employees are intentionally trying to sabotage the company. But, even if you’ve locked away critical information that can only be accessed by passing security measures in the vein of “Mission Impossible,” mistakes happen. After all, humans are only human.

The best course of action is to educate employees on the importance of having good cybersecurity hygiene. Inform them of the potential impacts of a cybersecurity incident, train them with mock phishing emails and other security scenarios, and hold employees accountable.

Retina scanners, complex laser grids and passwords stored in secure glass displays seem like adequate enough security measures. Unfortunately, employees don’t always get the memo that sensitive information shouldn’t be shouted across the office. Then again, they’re only human.

Check out more IT cartoons.

https://blog.sungardas.com/2018/09/it-security-cartoon-why-humans-are-cybersecuritys-biggest-adversary/

Complex system provided by Russelectric pioneers microgrid concept

By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

A unique power control system for Quinnipiac University’s York Hill Campus, located in Hamden, Connecticut, ties together a range of green energy power generation sources with utility and emergency power sources. The powerful supervisory control and data acquisition (SCADA) system gives campus facilities personnel complete information on every aspect of the complex system. Initially constructed when the term microgrid had barely entered our consciousness, the system continues to grow as the master plan’s vision of sustainability comes to fruition.

Hilltop campus focuses on energy efficiency and sustainability

In 2006, Quinnipiac University began construction on its new York Hill campus, perched high on a hilltop with stunning views of Long Island Sound. Of course, the campus master plan included signature athletic, residence, parking, and activity buildings that take maximum advantage of the site. But of equal importance, it incorporated innovative electrical and thermal distribution systems designed to make the new campus energy efficient, easy to maintain, and sustainable. Electrical distribution requirements, including primary electrical distribution, emergency power distribution, campus-wide load shedding, and cogeneration were considered, along with the thermal energy components of heating, hot water, and chilled water.

The final design includes a central high-efficiency boiler plant, a high-efficiency chiller plant, and a campus-wide primary electric distribution system with automatic load shed and backup power. The design also incorporates a microturbine trigeneration system to provide electrical power while recovering waste heat to help heat and cool the campus. Solar and wind power sources are integrated into the design. The York Hill campus design engineer was BVH Integrated Services, PC, and Centerbrook Architects & Planners served as the architect. The overall campus project won an award for Best Sustainable Design from The Real Estate Exchange in 2011.

Implementation challenges for the complex system

The ambitious project includes numerous energy components and systems. In effect, it was a microgrid before the term was widely used. Some years after initial construction began, Horton Electric, the electrical contractor, brought in Russelectric to provide assistance and recommendations for all aspects of protection, coordination of control, and utility integration – especially protection and control of the solar, wind and combined heating and power (CHP) components. Russelectric also provided project engineering for the actual equipment and coordination between its system and equipment, the utility service, the emergency power sources, and the renewable sources. Alan Vangas, current VP at BVH Integrated Services, said that “Russelectric was critical to the project as they served as the integrator and bridge for communications between building systems and the equipment.”

Startup and implementation were a complex process. The power system infrastructure, including the underground utilities, had been installed before all the energy system components had been fully developed, which made developing an effective control system more challenging. Some of the challenges arose from integrating the utility with existing on-site equipment, in particular the utility-entrance medium voltage (MV) equipment that had been installed with the first buildings. Because that equipment was motor-operated rather than breaker-operated, paralleling of generator sets with the utility (upon return of the utility source after a power interruption) was only possible in one direction: the natural gas generator could be paralleled to the utility, but because that generator is also used for emergency power, the utility could not be paralleled back onto the microgrid.

Unique system controls all power distribution throughout the campus

In response to the unique challenges, Russelectric designed, delivered, and provided startup for a unique power control system, and has continued to service the system since startup. The system controls all power distribution throughout the campus, including all source breakers – utility (15kV and CHP), wind, solar, generators, MV loop bus substations, automatic transfer switches (ATSs), and load controls.

As might be expected, this complex system requires a very complex load control system. For example, it has to allow the hockey rink chillers to run in the summer during an outage but maintain power to the campus. 
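To make the idea concrete, here is a minimal, purely illustrative sketch of priority-based load shedding; the load names, priorities, and capacity figures are hypothetical assumptions and are not the actual Russelectric control logic.

    # Hypothetical priority-based load-shed sketch (not the actual Russelectric logic).
    # Loads are kept highest-priority-first until demand fills the available generation.

    from dataclasses import dataclass

    @dataclass
    class Load:
        name: str
        kw: float
        priority: int  # 1 = most critical; higher numbers are shed first

    def shed_loads(loads, available_kw):
        """Return the loads to keep energized within the available capacity."""
        kept, demand = [], 0.0
        for load in sorted(loads, key=lambda l: l.priority):
            if demand + load.kw <= available_kw:
                kept.append(load)
                demand += load.kw
        return kept

    if __name__ == "__main__":
        campus = [
            Load("dorm life-safety systems", 300, 1),
            Load("hockey rink chillers", 600, 2),
            Load("central chiller plant", 900, 3),
            Load("parking garage lighting", 150, 4),
        ]
        # e.g. island mode on a 2 MW generator, minus a reserve margin
        for l in shed_loads(campus, available_kw=1800):
            print(f"keep: {l.name} ({l.kw} kW)")

In a real system, of course, the priorities and capacity limits come from the site's engineering studies and the control logic runs in the switchgear, not in a script; the sketch only shows the decision the load controller has to make.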

Here is the complete power control system lineup:

  • 15 kilovolt (kV) utility source that feeds a ring bus with 8 medium voltage/low voltage (MV/LV) loop switching substations for each building. Russelectric controls the opening and closing of the utility main switch and monitors the health and protection of the utility main.
  • 15kV natural gas 2 megawatt (MW) Caterpillar generator with switchgear for continuous parallel to the 15kV loop bus. Russelectric supplied the switchgear for full engine control and breaker operations to parallel with the utility and for emergency island operations.
  • One natural gas 750kW Caterpillar generator used for emergency backup only.
  • One gas-fired FlexEnergy micro turbine (Ingersoll Rand MT250 microturbine) for CHP distributed energy and utility tie to the LV substations. 
  • Control and distribution switchgear that controls the emergency, CHP, and utility. 
  • 12 ATSs for emergency power of 4 natural gas engines in each building. 
  • 25 vertical-axis wind turbines that generate 32,000 kilowatt-hours of renewable electricity annually. The wind turbines are connected to each of the LV substations. Russelectric controls the breaker output of the wind turbines and instructs the wind turbines when to come on or go off.
  • 721 rooftop photovoltaic panels gathering power from the sun, saving another 235,000 kilowatt-hours (kWh) per year. These are connected to each of the 3 dormitory LV substations. Russelectric controls the solar arrays’ breaker output and instructs the solar arrays when to come on or go off.

The system officially only parallels the onsite green energy generation components (solar, wind and micro turbine) with the utility, although they have run the natural gas engines in parallel with the solar in island mode for limited periods.

Since the initial installation, the system has been expanded to include additional equipment, including another natural gas generator, additional load controls, and several more ATSs.

SCADA displays complexity and detail of all the systems

Another feature of the Russelectric system for the project was the development of the Russelectric SCADA system, which takes the complexity and detail of all the systems and displays it for customer use. Other standard SCADA systems would not have been able to tie everything together – with one-line diagrams and front views of equipment that let operators view the entire system at a glance.

While the Russelectric products used are known for their quality and superior construction, what really made this project stand out is Russelectric’s ability to handle an incredibly wide variety of equipment and sources without standardizing on the type of generator or power source used. Rather than requiring specific vendors’ equipment, the company supports any equipment the customer wishes to use, committing to work through whatever challenges arise to make the microgrid work. This is critical to success when the task is controlling multiple traditional and renewable sources.

Combining business continuity and risk management into a single operational process is the most effective way to prepare for the worst.

By ROBERT SIBIK

Combining two seemingly unrelated entities to make a better, more useful creation is a keystone of innovation. Think of products like the clock radio and the wheeled suitcase, or putting meat between two slices of bread to make a sandwich, and you can see how effective it can be to combine two outwardly disparate things.

This viewpoint is useful in many scenarios, including in the business realm, especially when it comes to protecting a business from risk. Many companies treat risk management and business continuity as different entities under the same workflows, and that is a mistake; to be optimally effective, the two must be combined and aligned.

Mistaken Approaches

Business continuity traditionally starts with a business impact assessment, but many companies don’t go beyond that, making no tactical plan or strategic decisions about how to reduce impact once they have identified what could go wrong. The risk management process has been more mature – identifying various ways to treat problems, assigning ownership, and trying to reduce the likelihood of an event occurring – but it does little to reduce the impact of the event.

Organizations must move beyond simplistic goals of creating a business continuity plan using legacy business continuity/disaster recovery tools, or demonstrating compliance to a standard or policy using legacy governance, risk management and compliance software tools. Those approaches incorrectly move the focus to, “do we have our plans done?” or create a checklist mentality of, “did we pass the audit?” 

In addition to legacy approaches, benchmarking must be avoided, because it can provide misleading conclusions about acceptable risk and appropriate investment, and create a false sense of having a competitive advantage over others in the industry. Even companies in the same industry should have their own ideas about what constitutes risk, because risks are driven by business strategy, process, how they support customers, what they do, and how they do it.

Take the retail industry. Two organizations may sell the same basic product – clothing – but one sells luxury brands and the other sells value brands. The latter store’s business processes and strategies will focus on discounts and sales as well as efficiencies in stocking and logistics. The former will focus on personalized service and in-store amenities for shoppers. These two stores may exist in the same industry and sell the same thing, but they have vastly different types of merchandise, prices and clientele, which means their shareholder value and business risks will look very different from each other.

Businesses need to understand levels of acceptable risk in their individual organization and map those risks to their business processes, measuring them based on how much the business is impacted if a process is disrupted. By determining what risks are acceptable, and what processes create a risk by being aligned too closely to an important strategy or resource, leadership can make rational decisions at the executive level on what extent they invest in resilience – based not on theory, but on reality.

Creating an Integrated Approach with the Bowtie Model

Using the bowtie model, organizations can appropriately marry business continuity and risk management practices.

The bowtie model – based on the preferred neckwear of high school science teachers and Winston Churchill – uses one half of the bow to represent the likelihood of risk events and the other half to represent mitigation measures. The middle – the knot – represents a disaster event, which may comprise disruptions like IT services going down, a warehouse fire, a workforce shortage or a supplier going out of business.

To use this model, first determine every possible disruption to your organization through painstaking analysis of your business processes. Then determine the likelihood of each disruption (the left part of the bow), as well as the mitigating measures you can take to reduce the impact of the disruption should it occur (the right part of the bowtie).

Consider as an example the disruptive event of a building fire – the “knot” in this case. How likely is it? Was the building built in the 1800s and made of flammable materials like wood, or is it newer steel construction? Are there other businesses in the same building that would create a higher risk of fire, such as a restaurant? Do employees who smoke appropriately dispose of cigarettes in the right receptacle?

On the other half of the bowtie are the measures that could reduce the impact of a building fire, such as ensuring water sources and fire extinguishers throughout the building, testing sprinkler systems, having an alternate workspace to move to if part or all of the office is damaged during a fire, and so on.

The mitigating measures are especially key here, as they aren’t always captured in traditional insurance- and compliance-minded risk assessments. Understanding mitigation measures as well as the likelihood of risk events can change perspectives on how much risk an organization can take, because the organization then will understand what its business continuity and response capabilities are. Mitigation methods like being ready to move to an alternate workspace are more realistic than trying to prevent events entirely; at some point, you can accept the risk because you know how to address the impact.
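For teams that want to capture this structure in a planning tool, the rough sketch below models a single bowtie – threats on the likelihood side, mitigations on the impact side – using the building fire example; the entries, probabilities, and impact figures are hypothetical assumptions, not prescriptions.

    # Minimal bowtie sketch: threats (likelihood side) and mitigations (impact side)
    # hang off a central disruptive event. All entries are illustrative only.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Threat:
        description: str
        annual_likelihood: float   # rough probability per year

    @dataclass
    class Mitigation:
        description: str
        impact_reduction: float    # fraction of impact avoided if the event occurs (0..1)

    @dataclass
    class BowtieEvent:
        name: str
        base_impact_hours: float   # expected downtime with no mitigation in place
        threats: List[Threat] = field(default_factory=list)
        mitigations: List[Mitigation] = field(default_factory=list)

        def residual_impact_hours(self) -> float:
            remaining = 1.0
            for m in self.mitigations:
                remaining *= (1.0 - m.impact_reduction)
            return self.base_impact_hours * remaining

    fire = BowtieEvent("building fire", base_impact_hours=72)
    fire.threats.append(Threat("older wood construction", 0.02))
    fire.threats.append(Threat("restaurant tenant in the building", 0.01))
    fire.mitigations.append(Mitigation("tested sprinkler system", 0.5))
    fire.mitigations.append(Mitigation("alternate workspace ready", 0.6))
    print(f"residual impact: {fire.residual_impact_hours():.1f} hours")

The point of structuring it this way is that the same record holds both halves of the bow, so the likelihood discussion and the mitigation discussion can no longer drift apart.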

A Winning Combination

Where risk management struggles is where business continuity can shine: understanding what creates shareholder value, what makes an organization unique in its industry among its competitors, and how it distinguishes itself. Conversely, risk management brings a new perspective to business continuity by focusing on types of disruptions, their likelihoods, and how to prevent them.

To create a panoramic view of where an organization can be harmed if something bad happens, businesses must merge the concepts of business resilience (dependencies, impacts, incident management, and recovery) and risk management (assessment, controls, and effectiveness) and optimize them.

Bringing the two views together and performing holistic dependency mapping of the entire ecosystem allows an organization to treat both as a single operational process, bringing data together to create actionable information (based on the “information foundation” the company has created about impacts to business operations that can result from a wide variety of disruptions and risks) to empower decisive actions and positive results.

Using the bowtie method to create this holistic view, companies get the best of both worlds and ensure they understand the possibilities of various disruptions, are taking steps to mitigate the possibilities of disasters, and have prepared their responses to disasters should they strike. This approach to risk management will help keep a business up and running and ensure greater value for shareholders – this year and in years to come.

♦♦♦

Robert Sibik is senior vice president at Fusion Risk Management.

By CONNOR COX, Director of Business Development, DH2i (http://dh2i.com)

In 2017, many major organizations—including Delta Airlines and Amazon Web Services (AWS)—experienced massive IT outages. Despite the reality of a growing number of internationally publicized outages like these, an Uptime Institute survey collected by 451 Research had some interesting findings. While the survey found that a quarter of participating companies had experienced an unplanned data center outage in the last 12 months, close to one-third of companies (32 percent) still lack confidence that their resiliency strategy would leave them fully prepared should a disaster such as a site-wide outage occur in their IT environments.

Much of this failure to prepare for the unthinkable can be attributed to three points of conventional wisdom when it comes to disaster recovery (DR):

  • Comprehensive, bulletproof DR is expensive

  • Implementation of true high availability (HA)/DR is extremely complex, with database, infrastructure, and app teams involved

  • It’s very difficult to configure a resiliency strategy that adequately protects both new and legacy applications 

Latency is also an issue, and there is often a trade-off between cost and availability with most solutions. These assumptions can hold true when you are talking about traditional DR approaches for SQL Server. One of the more predominant approaches is the use of Always On Availability Groups, which provides management at the database level as well as replication for critical databases. Another traditional solution is Failover Cluster Instances, and you can also use virtualization in combination with one of the other strategies or on its own.

There are challenges to each of these common solutions, however, starting with the cost and availability tradeoff. Getting higher availability for SQL Server often means much higher costs. Licensing restrictions can also come into play: in order to run Availability Groups with more than a single database, you need the Enterprise Edition of SQL Server, which can cause costs to rise rapidly. There are also complexities surrounding these approaches, including the fact that everything needs to be the same, or “like for like,” for any Microsoft clustering approach. This can make things difficult if you have a heterogeneous environment or if you need to do updates or upgrades, which can incur lengthy outages.

But does it have to be this way? Is it possible to flip this paradigm to enable easy, cost-effective DR for heavy-duty applications like SQL Server, as well as containerized applications? Fortunately, the answer is yes: by using an all-inclusive, software-based approach, DR can become relatively simple for an organization. Let’s examine how and why.

Simplifying HA/DR

The best modern approach to HA/DR is one that encapsulates instances and allows you to move them between hosts with almost no downtime. This is achieved using a lightweight Vhost – really just a name and an IP address – to abstract and encapsulate those instances, which provides a consistent connection string.

Crucial to this concept is built-in HA—which gives automated fault protection at the SQL Server instance level—that can be used from host to host locally, as well as DR from site to site. This can then be very easily extended to disaster recovery, creating in essence an “HA/DR” solution. The solution relies on a means of being able to replicate the data from site A to site B, while the tool manages the failover component of rehosting the instances themselves to the other site. This gives you many choices around data replication, affording the ability to select the most common array replication, as well as vSAN technology or Storage Replica.
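The vendor’s tooling handles this re-hosting automatically, but purely as a conceptual sketch (not DH2i’s actual API), the idea of a stable Vhost name fronting whichever host is currently healthy can be expressed roughly as follows; the host names, sites, and health checks are hypothetical.

    # Conceptual sketch of Vhost-style instance re-hosting (not DH2i's actual API).
    # A virtual host name/IP stays constant while the SQL Server instance is
    # brought up on whichever local or DR-site host is healthy.

    class Vhost:
        def __init__(self, name, hosts):
            self.name = name        # stable connection-string name
            self.hosts = hosts      # ordered failover candidates: local nodes, then DR site
            self.active = hosts[0]

        def is_healthy(self, host):
            # Placeholder health check; a real system would probe the SQL instance itself.
            return host.get("healthy", False)

        def ensure_available(self):
            if self.is_healthy(self.active):
                return self.active
            for candidate in self.hosts:
                if self.is_healthy(candidate):
                    print(f"re-hosting {self.name}: {self.active['name']} -> {candidate['name']}")
                    self.active = candidate
                    return candidate
            raise RuntimeError("no healthy host available")

    vhost = Vhost("sqlprod-vhost", [
        {"name": "siteA-node1", "healthy": False},   # simulated local failure
        {"name": "siteA-node2", "healthy": False},
        {"name": "siteB-node1", "healthy": True},    # DR site, different subnet
    ])
    vhost.ensure_available()   # applications keep connecting to "sqlprod-vhost"

The design point the sketch illustrates is simply that applications never learn the physical host name; they only ever see the Vhost, so a site-to-site failover does not break connection strings.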

So with HA plus DR built in, a software solution like this is set apart from the traditional DR approaches for SQL Server. First, it can manage any infrastructure, as it is completely agnostic to underlying infrastructure, from bare metal to virtual machines or even a combination. It can also be run in the cloud, so if you have a cloud-based workload that you want to provide DR for, it’s simple to layer this onto that deployment and be able to get DR capabilities from within the same cloud or even to a different cloud. Since it isn’t restricted in needing to be “like for like,” this can be done for Windows Server all the way back to 2008R2, or even on your SQL Server for Linux deployments, Docker containers, or SQL Server from 2005 on up. You can mix versions of SQL Server or even the operating system within the same environment.

As for upgrades and updates, because you can mix and match versions, updates require minimal downtime. And when you think about the cost and complexity tradeoff that we see with the traditional solutions, this software-based tool breaks it because it facilitates high levels of consolidation. Since you can move instances around, users of this solution stack anywhere from 5 to 15 SQL Server instances per server on average, with no additional licensing needed to do so. This understandably results in a massive consolidation of the footprint, with management and licensing benefits, enabling licensing savings of 25 to 60 percent on average.

There is also no restriction on the edition of SQL Server you must use to do this type of clustering. So, you can do HA/DR with many nodes all on Standard Edition of SQL Server, which can create huge savings compared to having to buy premium software editions. And if you have already purchased those premium licenses, you can reclaim them for future use.

Redefining DB Availability

How does this look in practice? You can, for example, install this tool on two existing servers, add a SQL Server instance under management, and very simply fail that instance over for local HA. You can add a third node that can be in a different subnet and any distance away from the first two nodes, and then move that instance over to the other site—either manually or as the result of an outage.

By leveraging standalone instances for fewer requirements and greater clustering ability, this software-based solution decouples application workloads, file shares, services, and Docker containers from the underlying infrastructure. All of this requires no standardization of the entire database environment on one version or edition of the OS and database, enabling complete instance mobility from any host to any host. In addition to instance-level HA and near-zero planned and unplanned downtime, other benefits include management simplicity, peak utilization and consolidation, and significant cost savings.

It all comes down to redefining database availability. Traditional solutions mean that there is a positive correlation between cost and availability, and that you’ll have to pay up if you want peak availability for your environment. These solutions are also going to be difficult to manage due to their inherent complexity. But you don’t need to just accept these facts as your only option and have your IT team work ridiculous hours to keep your IT environment running smoothly. You do have options, if you consider turning to an all-inclusive approach for the total optimization of your environment.

In short, the right software solution can help unlock huge cost savings and consolidation as well as management simplification in your datacenter. Unlike traditional DR approaches for SQL Server, this one allows you to use any infrastructure in any mix and be assured of HA and portability. There’s really no other way to unify HA/DR management for SQL Server, Windows, Linux, and Docker to enable sizeable licensing savings – while also unifying disparate infrastructure across subnets for quick and easy failover.

 
Connor Cox is a technical business development executive with extensive experience helping customers transform their IT capabilities to maximize business value. As an enterprise IT strategist, Connor helps organizations achieve the highest overall IT service availability, improve agility, and minimize TCO. He has worked in the enterprise tech startup field for the past five years. Connor earned a Bachelor of Science in Business Administration from Colorado State University and was recently named a 2017 CRN Channel Chief.

 

    

As a Business Continuity practitioner with more than 20 years of experience, I have had the opportunity to see, review and create many continuity and disaster recovery plans. I have seen them in various shapes and sizes, from the meager 35-row spreadsheet to 1,000-plus pages in 3-ring binders. Reading these plans, in most cases the planners’ intent is very evident – check the “DR Plans done” box.

There are many different types of plans that are called into play when a disruption occurs. These could include Emergency Health & Safety, Crisis Management, Business Continuity, Disaster Recovery, Pandemic Response, Cyber Security Incident Response, and Continuity of Operations (COOP) plans, among others.

The essence of all these plans is to define “what” action is to be done, “when” it has to be performed and “who” is assigned the responsibility.

The plans are the definitive guide to respond to a disruption and have to be unambiguous and concise, while at the same time providing all the data needed for informed decision making.

...

https://www.ebrp.net/dr-plans-the-what-when-who/

Wednesday, 02 May 2018 14:15

DR Plans – The What, When & Who

By Tim Crosby

PREFACE: This article was written before ‘Meltdown’ and ‘Spectre’ were announced – two new critical “Day Zero” vulnerabilities that affect nearly every organization in the world. Given the sheer number of vulnerabilities identified in the last 12 months, one would think patch management would be a top priority for most organizations, but that is not the case. If the “EternalBlue” (MS17-010) and “Conficker” (MS08-067) vulnerabilities are any indication, I have little doubt that I will be finding the “Meltdown” and “Spectre” exploits in my audit initiatives for the next 18 months or longer. This article is intended to emphasize the importance of timely software updates.

“It Only Takes One” – One exploitable vulnerability, one easily guessable password, one careless click, one is all it takes. So, is all this focus on cyber security just a big waste of time? The answer is NO. A few simple steps or actions can make an enormous difference for when that “One” action occurs.

The key step everyone knows, but most seem to forget, is keeping your software and firmware updated. Outdated software provides hackers the footholds they need to break into your network, escalate privileges, and move laterally. During a recent engagement, 2% of the targeted users clicked on a link with an embedded payload that provided us shell access into their network. A quick scan identified a system with a Solaris Telnet vulnerability that was easily exploitable and allowed us to establish a more secure position. The vulnerable Solaris system was a video projector to which no one gave a second thought, even though the firmware update had existed for years. Our scan through this projector showed SMBv1 traffic, so we scanned for “EternalBlue,” targeting 2008 servers due to the likelihood that they would have exceptions to the “Auto Logoff” policy and would be a great place to gather clear text credentials for administrators or helpdesk/privileged accounts. Several of these servers were older HP servers with HP System Management Homepage installed, some were running Apache Tomcat with default credentials (which should ring a bell – the Equifax Argentina hack), a few were running JBoss/JMX, and one system was even vulnerable to MS09-050.

The vulnerabilities that make the above scenario possible have published exploits readily available in the form of free, open-source software designed for penetration testing. We used the Metasploit Framework to exploit a few of the “EternalBlue”-vulnerable systems, followed the NotPetya script, and dumped clear text credentials with Mimikatz. Before our scans completed, we were on a domain controller with “System” privileges. The total time from “one careless click” to Enterprise Admin: less than two hours.

The key to our success? Not our keen code-writing ability, not a new “Day 0” vulnerability, not a network of supercomputers, not thousands of IoT devices working in unison – it wasn’t even a trove of payloads purchased with Bitcoin on the dark web. The key was systems vulnerable to widely publicized exploits with widely available fixes in the form of updated software and/or patches. In short, outdated software. We used standard laptops running the Kali or Parrot Linux operating systems with widely available free and/or open-source software, most of which comes preloaded on those Linux distributions.

The projector running Solaris is not uncommon; many office devices, including printers and copiers, have full Unix or Linux operating systems with internal hard drives. Most of these devices go unpatched and therefore make great pivoting opportunities. They also provide an opportunity to gather data (printed or scanned documents) and forward it to an external FTP site during off hours – this is known as a store-and-forward platform. The patch/update for the system we referenced above has been available since 2014. Many of these devices also come with WiFi and/or Bluetooth interfaces enabled even when connected directly to the network via Ethernet, making them a target for bypassing your firewalls and WPA2 Enterprise security. Any device that connects to your network, no matter how small or innocuous, needs to be patched and/or have software updates applied on a regular basis, and should undergo rigorous system hardening, including disabling unused interfaces and changing default access settings. This one device with outdated software extended our attack long enough to identify other soft targets. Had it been updated/patched, our initial foothold could have vanished the first time auto logoff occurred.

Before you scoff or get judgmental, believing only incompetent or lazy network administrators or managers could allow this to happen, slow down and think. Where do the patch management statistics for your organization come from? What data do you rely on? Most organizations gather and report patching statistics based on data taken directly from their patch management platform. Fact: systems fall out of patch management systems, or are never added, for many reasons – a GPO push failed, a switch outage occurred during the process, or systems fall outside of the patch manager’s responsibility or knowledge (printers, network devices, video projectors, VOIP systems). Fact: your spam filter may be filtering critical patch-failure reports; this happens far more often than you might imagine.

A process outside of the patching system needs to verify that every device is in the patch management system and that the system is capable of pushing all patches to all devices. This process can be as simple and cost-effective as running and reviewing NMAP scripts, or as complex and automated as commercial products such as Tenable’s Security Center or BeyondTrust’s Retina, which can be scheduled to run and report immediately following the scheduled patch updates (a simple sketch of the scripted approach follows the list below). THIS IS CRITICAL! Unless you know every device connected to your network – wired, wireless or virtual – and its patch/version health status, there are going to be holes in your security. At the end of this process, no matter what it looks like internally, the CISO/CIO/ISO should be able to answer the following:

  • Did the patches actually get applied?

  • Did the patches undo a previous workaround or code fix?

  • Did ALL systems get patched?

  • Are there any NEW critical or high-risk vulnerabilities that need to be addressed?
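As one minimal illustration of that out-of-band verification step, the sketch below compares the live hosts on a subnet against an export from the patch management console and flags anything still exposing “EternalBlue.” It assumes nmap and its smb-vuln-ms17-010 script are installed on the scanning workstation; the subnet and the managed-host list are example values only.

    # Sketch: reconcile what the patch manager thinks it manages against what is
    # actually live on the network, and flag hosts still vulnerable to MS17-010.
    # Assumes nmap (with the smb-vuln-ms17-010 NSE script) is installed locally.

    import subprocess

    def discover_live_hosts(cidr):
        """Ping-sweep a subnet and return the responding IP addresses."""
        out = subprocess.run(["nmap", "-sn", "-oG", "-", cidr],
                             capture_output=True, text=True).stdout
        return {line.split()[1] for line in out.splitlines()
                if line.startswith("Host:") and "Status: Up" in line}

    def check_eternalblue(ip):
        """Return True if nmap reports the host VULNERABLE to MS17-010."""
        out = subprocess.run(["nmap", "-p445", "--script", "smb-vuln-ms17-010", ip],
                             capture_output=True, text=True).stdout
        return "VULNERABLE" in out

    if __name__ == "__main__":
        managed = {"10.0.10.21", "10.0.10.22", "10.0.10.40"}   # export from the patch manager
        live = discover_live_hosts("10.0.10.0/24")             # example subnet

        for ip in sorted(live - managed):
            print(f"unmanaged device on the network: {ip}")
        for ip in sorted(live):
            if check_eternalblue(ip):
                print(f"still vulnerable to MS17-010: {ip}")

Anything that shows up as live but unmanaged – a projector, a copier, a forgotten lab server – is exactly the kind of device the patching statistics never see.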

There are probably going to be devices that need to be patched manually, and there is a very strong likelihood that some software applications are locked into vulnerable versions of Java, Flash or even Windows XP/2003/2000. So, some devices will be patched less frequently or not at all. Many organizations simply say, “That’s just how it is until manpower or technology changes – we just accept the risk.”

That may be a reasonable response for your organization; it all depends on your risk tolerance. If you have a lower risk appetite, what about firewalls or VLANs with ACL restrictions for devices that can’t be patched or upgraded? Why not leverage virtualization to reduce the attack surface of that business-critical application that needs to run on an old version of Java or only works on 2003 or XP? Published-application technologies from Citrix, Microsoft, VMware or Phantosys fence the vulnerabilities into a small, isolated window that can’t be accessed by the workstation OS. Properly implemented, the combination of VLANs/DMZs and application virtualization reduces the actual probability of exploit to nearly zero and creates an easy way to identify and log any attempts to access or compromise these vulnerable systems. Once again, these are mitigating countermeasures for when patching isn’t an option.

We will be making many recommendations to our clients, including multi-factor authentication for VLAN access, changes to password length and complexity, and additional VLANs. However, topping the list of suggestions will be patch management and regular internal vulnerability scanning, preferably as the verification step for the full patch management cycle. Keeping your systems patched ensures that when someone makes a mistake and lets the bad guy or malware in, they have nowhere to go and a limited time to get there.

As an ethical hacker or penetration tester, one of the most frustrating things I encounter is spending weeks of effort to identify and secure a foothold on a network only to find myself stuck; I can’t escalate privileges, I can’t make the session persistent, I can’t move laterally, ultimately rendering my attempts unsuccessful. Though frustrating for me, this is the optimal outcome for our clients as it means they are being proactive about their security controls.

Frequently, hackers are looking for soft targets and follow the path of least resistance. To protect yourself, patch your systems and isolate those you can’t. By doing so, you will increase the level of difficulty, effort and time required, leaving a pretty good chance they will move on to someone else. There is an old joke about two guys running from a bear, and the punch line applies here as well: “I don’t need to be faster than the bear, just faster than you…”

Make sure ALL of your systems are patched, upgraded or isolated with mitigating countermeasures – thus making you faster than the other guy who can’t outrun the bear.

About Tim Crosby:

Timothy Crosby is Senior Security Consultant for Spohn Security Solutions. He has over 30 years of experience in the areas of data and network security. His career began in the early 80s securing data communications as a teletype and cryptographic support technician/engineer for the United States Military, including numerous overseas deployments. Building on the skillsets he developed in these roles, he transitioned into network engineering, administration, and security for a combination of public and private sector organizations throughout the world, many of which required maintaining a security clearance. He holds industry leading certifications in his field, and has been involved with designing the requirements and testing protocols for other industry certifications. When not spending time in the world of cybersecurity, he is most likely found in the great outdoors with his wife, children, and grandchildren.

Migrating and managing your data storage in the cloud can offer significant value to the business. Start by making good strategic decisions about moving data to the cloud, and which cloud storage management toolsets to invest in.

Your cloud storage vendor will provide some security, availability, and reporting. But the more important your data is, the more you want to invest in specialized tools that will help you to manage and optimize it.

Cloud Storage Migration and Management Overview

First, know whether you are moving data into an application computing environment or moving backup/archival data for long-term storage in the cloud. Many companies start off by storing long-term backup data in the cloud; others start with Office 365. Still others work with application providers who extend the application environment to the vendor-owned cloud, like Oracle or SAP. In all cases you need to understand storage costs and information security considerations such as encryption. You will also need to decide how to migrate the data to the cloud.

...

http://www.enterprisestorageforum.com/storage-management/managing-cloud-storage-migration.html

Tuesday, 27 March 2018 05:11

Managing Cloud Storage Migration

Leveraging Compliance to Build Regulator and Customer Trust

Bitcoin and other cryptocurrencies continue to gain ground as investors buy in, looking for high returns, and as acceptance of it as payment takes hold. However, with such growth come risks and challenges that fall firmly under the compliance umbrella and must be addressed in a proactive, rather than reactive, manner.

Cryptocurrency Challenges

One of the greatest challenges faced by the cryptocurrency industry is its volatility and the fact that the cryptocurrency markets are, unlike mainstream currency markets, a social construct. Just as significantly, all cryptocurrency business is conducted via the internet, placing certain obstacles in the path of documentation. The online nature of cryptocurrency leads many, especially regulators, to remain dubious of its legitimacy and suspicious that it is used primarily for nefarious purposes, such as money-laundering and drug trafficking, to name a few.

This leaves companies that have delved into cryptocurrency with an onerous task: building trust among regulators and customers alike, with the ultimate goal of fostering cryptocurrency’s survival. From a regulatory standpoint, building trust involves not only setting policies and procedures pertaining to the vetting of customers and the handling of cryptocurrency transactions and trades, but also leveraging technology to document and communicate them to the appropriate parties. Earning regulators’ trust also means keeping meticulous records rendered legally defensible by technology. Such records should detail which procedures for vetting customers were followed; when, by whom and in what jurisdiction the vetting took place; and what information was shared with customers at every step of their journey.

On the customer side, records must document the terms of all transactions and the messages conveyed to customers throughout their journey. Records of what customers were told regarding how a company handles its cryptocurrency transactions and any measures it takes to ensure the legitimacy of activities connected with transactions should be maintained as well.

...

http://www.corporatecomplianceinsights.com/cryptocurrency-challenges-opportunities/

How to help your organization plan for and respond to weather emergencies

By Glen Denny, Baron Services, Inc.

Hospitals, campuses, and emergency management offices should all be actively preparing for winter weather so they can be ready to respond to emergencies. Weather across the country is varied and ever-changing, but each region has specific weather threats that are common to their area. Understanding these common weather patterns and preparing for them in advance is an essential element of an emergency preparedness plan. For each weather event, those responsible for organizational safety should know and understand these four important factors: location, topography, timing, and pacing.

In addition, be sure to understand the important terms the National Weather Service (NWS) uses to describe changing weather conditions. Finally, develop and communicate a plan for preparing for and responding to winter weather emergencies. Following the simple steps in the sample planning tool provided will aid you in building an action plan for specific weather emergency types.

Location determines the type, frequency and severity of winter weather

The type of winter weather experienced by a region depends in great part on its location, including proximity to the equator, bodies of water, mountains, and forests. These factors can shape the behavior of winter weather in a region, determining its type, frequency, and severity. Knowing how weather affects a region can be the difference between lives saved and lives lost.

Winter weather can have a huge impact on a region’s economy. For example, in the first quarter of 2015, insurance claims for winter storm damage totaled $2.3 billion, according to the Insurance Information Institute, a New York-based industry association. One Boston-area insurance executive called it the worst first quarter of winter weather claims he’d ever seen. The statistics, quoted in a Boston Globe article, “Mounting insurance claims are remnants of a savage winter,” noted that most claims were concentrated in the Northeast, where winter storms had dumped 9 feet of snow on Greater Boston. According to the article, that volume of claims was above longtime historic averages and, coupled with the recent more severe winters, could prompt many insurance companies to eventually pass the costs on to consumers through higher rates.

Every region has unique winter weather, and different ways to mitigate the damage. Northern regions will usually have some form of winter precipitation – but they also have the infrastructure to handle it. In these areas, there is more of a risk that mild events can become more dangerous because people are somewhat desensitized to winter weather. Sometimes, they ignore warnings and travel on the roads anyway. Planners should remember to issue continual reminders of just how dangerous winter conditions can be.

Areas of the Southwest are susceptible to mountain snows and extreme cold temperatures. These areas need warming shelters and road crews to deal with snow and ice events when they occur.

Any winter event in the Southeast can potentially become an extreme event, because organizations in this area do not typically have many resources to deal with it. It takes more time to put road crews in place, close schools, and shut down travel. There is also an increased risk for hypothermia, because people are not as aware of the potential dangers cold temperatures can bring. Severe storms and tornadoes can also happen during the winter season in the Southeast.

Figure 1 is a regional map of the United States. Table 1 outlines the major winter weather issues each region should consider and plan for.

Topography influences winter weather

Topography includes cities, rivers, and mountains. Topographical features influence winter weather because they help direct air flow, causing air to rise, fall, and change temperature. Wide open spaces – like those found in the Central U.S. – will increase wind issues.

Timing has a major effect on winter weather safety

Knowing when a winter event will strike is one of the safety official’s greatest assets because it enables a degree of advance warning and planning. But even with early notification, dangerous road conditions that strike during rush hour traffic can be a nightmare. Snowstorms that struck Atlanta, GA and Birmingham, AL a few years ago occurred in the middle of the day without adequate warning or preparation and caused travel-related problems.

Pacing of an event is important – the speed with which it occurs can have adverse impacts

Storms that develop in a few hours can frequently catch people off guard and without appropriate preparation or advance planning. In some regions, like the Northeast, people are so accustomed to winter weather that they ignore the slower, milder events. Many people think it is fine to be out on the roads with a little snowfall, but it will accumulate over time. It is not long before they are stranded on snowy or icy roads.

As part of considering winter event pacing, emergency planners should become familiar with the terms the National Weather Service (NWS) currently uses to describe winter weather phenomena (snow, sleet, ice, wind chill) that affect public safety, transportation, and/or commerce. Note that for all advisories designated as a “warning,” travel will become difficult or impossible in some situations. For these circumstances, planners should urge people to delay travel plans until conditions improve.

A brief overview of NWS definitions appears on Table 2. For more detailed information, go to https://www.weather.gov/lwx/WarningsDefined.

Planning for winter storms

After hurricanes and tornadoes, severe winter storms are the “third-largest cause of insured catastrophic losses,” according to Dr. Robert Hartwig, immediate past president of the Insurance Information Institute (III), who was quoted in Property Casualty 360° online publication. “Most winters, losses from snow, ice and other freezing hazards total approximately $1.2 billion, but some storms can easily exceed that average.”

Given these figures, organizations should take every opportunity to proactively plan. Prepare your organization for winter weather. Have a defined plan and communicate it to all staff. The plan should include who is responsible for monitoring the weather, what information is shared and how. Identify the impact to the organization and show how you will maintain your facility, support your customers, and protect your staff.

Once you have a plan, be sure to practice it just as you would for any other crisis plan. Communicate the plan to others in the supply chain and transportation partners. Make sure your generator tank is filled and ready for service.

Implement your plan and be sure to review and revise it based on how events unfold and feedback from those involved.

A variety of tools are available to help prepare action plans for weather events. The following three figures are tools Baron developed for building action plans for various winter weather events.

Use these tools to determine the situation’s threat level, then adopt actions suggested for moderate and severe threats – and develop additional actions based on your own situation.

Weather technology assists in planning for winter events

A crucial part of planning for winter weather is the availability of reliable and detailed weather information to understand how the four factors cited affect the particular event. For example, Baron Threat Net provides mapping that includes local bodies of water and rivers along with street level mapping. Threat Net also provides weather pattern trends and expected arrival times along with their expected impact on specific areas. This includes 48-hour models of temperature, wind speed, accumulated snow, and accumulated precipitation. In addition to Threat Net, the Baron API weather solution can be used by organizations that need weather integrated into their own products and services.

To assist with the pacing evaluation, proximity alerts can forecast an approaching wintry mix and snow, and can be used along with NWS advisories. While those advisories are critical, a storm or event has to reach the NWS threshold before an advisory is issued. Technology like proximity alerting is helpful precisely because an event that does not reach an NWS-defined threshold can still be dangerous. Pinpoint alerting capabilities can warn organizations when dangerous storms are approaching. Current-conditions road weather information covers flooded, slippery, icy, and snow-covered roads. The information can be viewed on multiple fixed and mobile devices at one time, including an operation center display, desktop display, mobile phone, and tablet.
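As a rough illustration of how threshold-based alerting might feed an operations dashboard – this is not Baron’s actual API; the forecast fields, values, and thresholds below are hypothetical – consider:

    # Illustrative proximity-alert sketch. The feed format, field names, and
    # thresholds are hypothetical, not Baron's actual API or alerting rules.

    from dataclasses import dataclass

    @dataclass
    class ForecastPoint:
        hours_out: int
        snow_in: float        # forecast snow accumulation (inches)
        wind_mph: float
        road_state: str       # e.g. "wet", "icy", "snow-covered"

    def threat_level(points, lead_time_hours=12):
        """Classify the planning window as none/moderate/severe."""
        window = [p for p in points if p.hours_out <= lead_time_hours]
        if any(p.snow_in >= 6 or p.road_state == "icy" for p in window):
            return "severe"
        if any(p.snow_in >= 1 or p.wind_mph >= 35 for p in window):
            return "moderate"
        return "none"

    forecast = [
        ForecastPoint(3, 0.2, 10, "wet"),
        ForecastPoint(9, 2.5, 20, "snow-covered"),
    ]
    print(threat_level(forecast))   # "moderate": start pre-treating roads, alert staff

An organization would tune the thresholds and lead time to its own action plan, then map each returned level to the moderate- or severe-threat actions in the planning tools above.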

An example is a Nor’easter that occurred in February 2017 along the East Coast. The Baron forecasting model was accurate and consistent in the placement of the heavy precipitation, including the rain/snow line, leading up to the event and throughout the storm. Models also accurately predicted the heaviest bands of snow, snow accumulation, and wind speed. As the radar image showed the rain-to-snow line slowly moving east, the road conditions product displayed a brief spatial window in which, once the snow fell, roads were still wet for a very short time before becoming snow-covered – which was evident in central and southern NJ and southeastern RI.

Final thoughts on planning for winter weather

Every region within the United States will experience winter weather differently. The key is knowing what you are up against and how you can best respond. Considering the four key factors – location, topography, timing, and pacing – will help your organization plan and respond proactively.

By Ed Beadenkopf, PE

As we view with horror the devastation wrought by recent hurricanes in Florida, South Texas, and the Caribbean, questions are rightly being asked about what city planners and government agencies can do to better prepare communities for natural disasters. The ability to plan and design infrastructure that provides protection against natural disasters is obviously a primary concern of states and municipalities. Likewise, federal agencies such as the Federal Emergency Management Agency (FEMA), the U.S. Army Corps of Engineers (USACE), and the U.S. Bureau of Reclamation cite upgrading aging water infrastructure as a critical priority.

Funding poses a challenge

Addressing water infrastructure assets is a major challenge for all levels of government. While cities and municipalities are best suited to plan individual projects in their communities, they do not have the funding and resources to address infrastructure issues on their own. Meanwhile, FEMA, USACE and other federal agencies are tasked with broad, complex missions, of which flood management and resiliency is one component.

Federal funding for resiliency projects is provided in segments, which inadvertently prevents communities from addressing problems in their entirety. Instead, funding must be divided into smaller projects that never address the entire issue. To make matters even more challenging, recent reports indicate that the White House plan for infrastructure investment will require leveraging a major percentage of funding from state and local governments and the private sector.

Virtual, long-term planning is the solution

So, what’s the answer? How can we piece together an integrated approach between federal and local governments with segmented funding? Put simply, we need effective, long-term planning.

Cities can begin by planning smaller projects that can be integrated into the larger, federal resilience plan. Local governments can address funding as a parallel activity to their master planning. Comprehensive planning tools, such as the Atkins-designed City Simulator, can be used to stress test proposed resilience-focused master plans.

A master plan developed using the City Simulator technology is a smart document that addresses the impact of growth on job creation, water conservation, habitat preservation, transportation improvements, and waterway maintenance. It enables local governments to be the catalyst for high-impact planning on a smaller scale.

By simulating a virtual version of a city growing and being hit by climate change-influenced disasters, City Simulator measures the real impacts and effectiveness of proposed solutions and can help lead the way in selecting the improvement projects with the highest return on investment (ROI). The resulting forecasts of ROIs greatly improve a community’s chance of receiving federal funds.
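City Simulator’s internals are proprietary, but the underlying ROI arithmetic can be sketched with made-up numbers; the project cost, loss estimates, planning horizon, and discount rate below are illustrative assumptions only.

    # Back-of-the-envelope ROI arithmetic for a resilience project, using made-up
    # numbers; a simulation tool would derive these inputs rather than assume them.

    def resilience_roi(project_cost, annual_loss_before, annual_loss_after,
                       horizon_years=30, discount_rate=0.03):
        """Net present value of avoided losses relative to the project cost."""
        avoided_per_year = annual_loss_before - annual_loss_after
        npv_avoided = sum(avoided_per_year / (1 + discount_rate) ** y
                          for y in range(1, horizon_years + 1))
        return (npv_avoided - project_cost) / project_cost

    # e.g. a $40M flood-control asset cutting expected annual flood losses
    # from $6M to $2M over a 30-year planning horizon
    print(f"ROI: {resilience_roi(40e6, 6e6, 2e6):.0%}")

Ranking candidate projects by this kind of figure, however the inputs are produced, is what lets planners argue for the improvements with the highest return when competing for federal funds.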

Setting priorities helps with budgeting

While understanding the effectiveness of resiliency projects is critical, communities must also know how much resiliency they can afford. For cities and localities prone to flooding, a single resiliency asset can cost tens of millions of dollars, the maintenance of which could exhaust an entire capital improvement budget if planners let it. Using effective cost forecasting and schedule optimization tools that look at the long-term condition of existing assets can help planners prioritize critical projects that require maintenance or replacement, while knowing exactly what impact those projects will have on local budgets and whether additional funding will be necessary.

It is imperative to structure a funding solution that can address these critical projects before they become recovery issues. Determining which communities are affected by the project is key to planning how to distribute equitable responsibility for the necessary funds to initiate the project. Once the beneficiaries of the project are identified, local governments can propose tailored funding options such as Special Purpose Local Option Sales Tax, impact fees, grants, and enterprise funds. The local funding can be used to leverage additional funds through bond financing, or to entice public-private partnership solutions, potentially with federal involvement.

Including flood resiliency in long-term infrastructure planning creates benefits for the community that go beyond flood prevention, while embracing master planning has the potential to impact all aspects of a community’s growth. Local efforts of this kind become part of a larger national resiliency strategy that goes beyond a single community, resulting in better prepared cities and a better prepared nation.

Ed Beadenkopf, PE, is a senior project director in SNC-Lavalin’s Atkins business with more than 40 years of engineering experience in water resources program development and project management. He has served as a subject matter expert for the Federal Emergency Management Agency, supporting dam and levee safety programs.

There’s a crack in California. It stretches for 800 miles, from the Salton Sea in the south, to Cape Mendocino in the north. It runs through vineyards and subway stations, power lines and water mains. Millions live and work alongside the crack, many passing over it (966 roads cross the line) every day. For most, it warrants hardly a thought. Yet in an instant, that crack, the San Andreas fault line, could ruin lives and cripple the national economy.

In one scenario produced by the United States Geological Survey, researchers found that a big quake along the San Andreas could kill 1,800 people, injure 55,000 and wreak $200 billion in damage. It could take years, nearly a decade, for California to recover.

On the bright side, during the process of building and maintaining all that infrastructure that crosses the fault, geologists have gotten an up-close and personal look at it over the past several decades, contributing to a growing and extensive body of work. While the future remains uncertain (no one can predict when an earthquake will strike) people living near the fault are better prepared than they have ever been before.

...

https://www.popsci.com/extreme-science-san-andreas

Sunday, 25 February 2018 13:35

Extreme Science: The San Andreas Fault

Damage to reputation or brand, cyber crime, political risk and terrorism are some of the risks that private and public organizations of all types and sizes around the world must face with increasing frequency. The latest version of ISO 31000 has just been unveiled to help manage the uncertainty.

Risk enters every decision in life, but clearly some decisions need a structured approach. For example, a senior executive or government official may need to make risk judgements associated with very complex situations. Dealing with risk is part of governance and leadership, and is fundamental to how an organization is managed at all levels.

Yesterday’s risk management practices are no longer adequate to deal with today’s threats and they need to evolve. These considerations were at the heart of the revision of ISO 31000, Risk management – Guidelines, whose latest version has just been published. ISO 31000:2018 delivers a clearer, shorter and more concise guide that will help organizations use risk management principles to improve planning and make better decisions. Following are the main changes since the previous edition:

...

https://www.iso.org/news/ref2263.html

Thursday, 15 February 2018 15:54

The new ISO 31000 keeps risk management simple

Some things are hard to predict. And others are unlikely. In business, as in life, both can happen at the same time, catching us off guard. The consequences can cause major disruption, which makes proper planning, through business continuity management, an essential tool for businesses that want to go the distance.

The Millennium brought two nice examples, both of the unpredictable and the improbable. For a start, it was a century leap year. This was entirely predictable (it occurs any time the year is cleanly divisible by 400). But it’s also very unlikely, from a probability perspective: in fact, it’s only happened once before (in 1600, less than 20 years after the Gregorian calendar was introduced).

A much less predictable event in 2000 happened in a second-hand bookstore in the far north of rural England. When the owner of Barter Books discovered an obscure war-time public-information poster, it triggered a global phenomenon. Although it took more than a decade to peak, just five words spawned one of the most copied cultural memes ever: Keep Calm and Carry On.

...

https://www.iso.org/news/ref2240.html

Mahoning County is located on the eastern edge of Ohio at the border with Pennsylvania. It has a total area of 425 square miles, and as of the 2010 census, its population was 238,823. The county seat is Youngstown.

Challenges

  • Eliminate application slowdowns caused by backups spilling over into the workday
  • Automate remaining county offices that were still paper-based
  • Extend use of data-intensive line-of-business applications such as GIS

...

https://www.riverbed.com/customer-stories/mahoning-county-ohio.html

Anyone following enterprise data storage news couldn’t help but notice that parts of the backup market are struggling badly. From its glory days of a couple of years back, the purpose-built backup appliance (PBBA), for example, has been trending downward in revenue, according to IDC.

"The PBBA market remains in a state of transition, posting a 16.2% decline in the second quarter of 2017," said Liz Conner, an analyst at IDC. "Following a similar trend to the enterprise storage systems market, the traditional backup market is declining as end users and vendors alike explore new technology."

She’s talking about alternatives such as the cloud, replication and snapshots. But can these really replace backup?

...

http://www.enterprisestorageforum.com/backup-recovery/data-storage-backups-vs-snapshots-and-replication.html