Spring Journal

Volume 32, Issue 1


Industry Hot News


In January, BlackRock accidentally leaked confidential sales data by posting spreadsheets insecurely online – certainly not the first time we’ve seen sensitive information “escape” an organization. Incisive CEO Diane Robinette provides guidance companies can follow to minimize spreadsheet risk.

Several weeks ago, the world’s largest asset manager, BlackRock, accidentally posted a link to spreadsheets containing confidential information about thousands of the firm’s financial advisor clients. As reported by Bloomberg News, the link was inadvertently posted on the company’s web pages dedicated to BlackRock’s iShares exchange-traded funds. Included in these spreadsheets was a categorized list of advisors broken into groups identified as “dabblers” and “power users.”

While BlackRock was fortunate that no financial information was included in these spreadsheets, the firm is still left to deal with reputational damage. For the rest of us, this breach brings an important issue — spreadsheet risk management — back into the spotlight.

Despite years of rumors predicting the demise of spreadsheets, they are still widely used by businesses of every size. And why shouldn’t they be? Beyond providing an easy way to categorize clients and business partners, spreadsheets continue to meet the analytical needs of finance and business executives. They are especially useful for analyzing and providing evidentiary support for decision-making and for complex calculations where data is continuously changing. Yet, as we’ve seen time and time again, spreadsheets represent continued exposure to risk.
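Spreadsheet governance can start with something as simple as screening files before they are published. The sketch below is purely illustrative (it is not Incisive’s tooling or BlackRock’s process, and the keyword list is an assumption of mine): it flags CSV column headers that hint at confidential client data before a file goes to a public site.

```python
# Illustrative pre-publication check (an assumption of mine, not
# Incisive's product or BlackRock's process): scan a CSV headed for
# a public site for column names suggesting confidential client data.

import csv
import tempfile

SENSITIVE = {"client", "advisor", "email", "revenue", "ssn"}

def risky_columns(path):
    """Return header fields whose names match a sensitive keyword."""
    with open(path, newline="") as f:
        header = next(csv.reader(f), [])
    return {c for c in header if any(k in c.lower() for k in SENSITIVE)}

# Demo with a throwaway file standing in for a sheet awaiting upload.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("Advisor Name,Segment,Contact Email\n")
print(sorted(risky_columns(f.name)))  # -> ['Advisor Name', 'Contact Email']
```

A real control would, of course, also cover XLSX parsing, cell contents and access permissions, not just header names.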



Wednesday, 20 March 2019 15:37

Lessons from BlackRock’s Data Leak

The sharp decline follows an FBI takedown of so-called "booter," or DDoS-for-hire, websites in December 2018.

The average distributed denial-of-service (DDoS) attack size shrank 85% in the fourth quarter of 2018 following an FBI takedown of "booter," or DDoS-for-hire, websites in December 2018, researchers report.

Late last year, United States authorities seized 15 popular domains as part of an international crackdown on booter sites. Cybercriminals can use booter websites (also known as "stresser" websites) to pay to launch DDoS attacks against specific targets and take them offline. Booter sites open the door for lesser-skilled attackers to launch devastating threats against victim websites.

About a year before the takedown, the FBI issued an advisory detailing how booter services can drive the scale and frequency of DDoS attacks. These services, advertised in Dark Web forums and marketplaces, can be used to legitimately test network resilience, but they also make it easy for cyberattackers to launch DDoS attacks using an existing network of infected devices.



Wednesday, 20 March 2019 15:35

DDoS Attack Size Drops 85% in Q4 2018

The #MeToo and #TimesUp movements brought the continuing problem of workplace misconduct onto the national stage, shining a light not only on the prevalence of harassment, but also on the dire need for effective processes to investigate when allegations are made. Clouse Brown Partner Alyson Brown discusses.

Confidential information
It’s in a diary
This is my investigation
It’s not a public inquiry.

— “Private Investigations,” Mark Knopfler/Dire Straits

It’s Friday. Thoughts are turning to the weekend ahead. The phone rings: We have a problem — I’ve gotten a complaint of sexual harassment against a senior VP. What do I do?

I’ve had variations of this call dozens of times. In the months since #MeToo and #TimesUp grabbed national headlines, the volume of calls about workplace complaints, especially those involving senior executives, has skyrocketed.

Employers and executives must act promptly when faced with these complaints. An effective workplace investigation can mean the difference between prompt resolution and unwanted litigation. Moreover, in the current business environment, how employers investigate potential misconduct can affect that company’s reputation almost as much as the alleged conduct itself.

Consistent principles and procedures must be followed whenever allegations of misconduct are investigated. While volumes have been written on how to ask questions and read body language, far less guidance is available on the pre-planning necessary for an effective investigation.



The automation, stability of infrastructure, and inherent traceability of DevOps tools and processes offer a ton of security and compliance upsides for mature DevOps organizations.

According to a new survey of over 5,500 IT practitioners around the world, conducted by Sonatype, "elite" DevOps organizations with mature practices, such as continuous integration and continuous delivery of software, are most likely to fold security into their processes and tooling for a true DevSecOps approach.

Throughout the "DevSecOps Community Survey 2019," responses show that mature DevOps organizations have an increasing awareness of the importance of security in rapid delivery of software and the advantages that DevOps affords them in getting security integrated into their software development life cycle.






To make sure that homeowners are aware of the importance of flood insurance, the I.I.I. recently partnered with the Weather Channel.

A video posted to the Weather Channel’s Facebook page demonstrates just how destructive flooding can be; for example, the video shows the devastation Hurricane Sandy wreaked on Breezy Point, a coastal community in Queens, N.Y.

“What’s remarkable about flood insurance is that only 12 percent of people have it,” says Sean Kevelighan, I.I.I.’s CEO. One misconception that people have about flood insurance is that it’s included in a homeowners policy. But that’s not the case. A separate flood policy must be obtained. Flood insurance is mostly sold by FEMA’s National Flood Insurance Program, but some private insurers have begun offering it as well.



The latest twist in the Equifax breach has serious implications for organizations.

When the Equifax breach — one of the largest breaches of all time — went public nearly a year-and-a-half ago, it was widely assumed that the data had been stolen for nefarious financial purposes. But as the resulting frenzy of consumer credit freezes and monitoring programs spread, investigators who were tracking the breach behind the scenes made an interesting discovery.

The data had up and vanished.

This was surprising because if the data had, in fact, been stolen with the ultimate goal of committing financial fraud, experts would have expected it to be sold on the Dark Web. At the very least, they would have expected to see a wave of fraudulent credit transactions.




Wednesday, 20 March 2019 15:30

The Case of the Missing Data

(TNS) — Somerset County, Pa., will test its CodeRED emergency public mass notification system at 3 p.m. Tuesday, according to the county’s top emergency management official.

Joel Landis, director of the Somerset County Department of Emergency Services, said on Saturday that he urged business owners and members of the public to sign up prior to the test for the service, which is used to send notifications about emergency situations in the county by phone, email, text message and social media.

Landis said in an email that the “CodeRED system provides Somerset County public safety officials the ability to quickly deliver emergency messages to your landline or cell phone to targeted areas or the entire county.”

The CodeRED system is used to distribute information about emergencies such as evacuation notices, utility outages, water main breaks, fires, floods and chemical spills, according to information on Somerset County’s website.



A side-by-side comparison of key test features and when best to apply them based on the constraints within your budget and environment.

Crowdsourced security has recently moved into the mainstream, displacing traditional penetration-testing companies from what once was a lucrative niche space. While several companies have pioneered their own programs (Google, Yahoo, Mozilla, and Facebook), Bugcrowd and HackerOne now carve up the lion's share of what is a fast-growing market.

How does crowdsourced pen testing compare with traditional pen testing, and how does it differ in methodology? Does this disruptive approach actually make things better? Read on for a side-by-side comparison...



Wednesday, 20 March 2019 15:27

Crowdsourced vs. Traditional Pen Testing

While every tech vendor seems to lay claim to being an expert in digital transformation, it stands to reason that not all can be. For sure, there are many vendors with experience helping clients create new customer or employee digital experiences, but this experience doesn’t make them experts in digital business transformation.

For 20 years, Forrester has been extolling the virtues of improving customer experience – we’ve even proven the value of delivering world-class experiences, including digital experiences.

And over these years, many of our clients have successfully mapped customer journeys and improved touchpoints, all the while seeing gradual improvements in their Customer Experience Index (CX Index™) scores.

But what happens when everyone’s customer journeys are optimized and when all digital experiences begin to look similar? As customer expectations rise, you must invest to improve touchpoints just to remain competitive. Without a major shift in how your leadership thinks about digital, your firm will struggle to break out from the pack.



The Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry (the Banking Royal Commission, or BRC) has been in Australian media headlines since the Commission was established on December 14, 2017. On February 4, 2019, the widely anticipated final report from Commissioner Hayne was released.

While Australian banks were the BRC’s focus, international institutions watched with keen interest and made submissions to ensure their voices were heard, anticipating that the resulting regulations for financial institutions would be far stricter and more structured.

However, the impact is not limited to the financial sector. Commissioner Hayne recommended a change to the regulators’ enforcement approach, which may transform the perceived soft touch of the country’s principal corporate watchdog, the Australian Securities and Investments Commission (ASIC).

For overseas companies operating in Australia, these changes may impact future engagements with the Australian regulator and the prospects of global settlements where multiple regulators are involved.



When I started my career in marketing analytics almost 20 years ago, the biggest challenge was wrangling first- and third-party data, joining them together, and analyzing customer patterns. It was like mining for gold; we wanted to discover something unique about our customers, a nugget that our marketing counterparts could use to craft customized messages or target more effectively. It took a lot of time (this was before the ad- and martech boom), but it was fun spending hours programming and running models to understand customer behaviors.

Well, it was fun for me. My colleagues may not agree.

So when I was asked to take over data management platform coverage, I geeked out in excitement. It was my time to learn more about how data-specific technologies automate the mundane tasks that I had to do years ago, and with new, quickly changing data sources.



(TNS) — Efforts are underway to help residents in recovery mode after four tornadoes left behind a path of damage across two Michigan counties.

The National Weather Service confirmed that two tornadoes touched down in Shiawassee County and two in Genesee County, damaging homes and barns, splintering trees and downing power lines, and leaving thousands in the dark.

An informational meeting is set for 3 p.m. Sunday, March 17 in the cafeteria at Durand High School, 9575 E. Monroe Drive, with emergency management and government officials to address items such as recovery efforts, resident/business resources for relief, and short/long-term housing needs.

Shiawassee County Sheriff Brian BeGole confirmed a local state of emergency has been declared after 61 homes were damaged — 20 deemed uninhabitable or destroyed — as well as 16 barns and two businesses by the tornadoes, including an EF-2 with winds up to 125 mph from Newburg Rd/Bancroft Rd to M-71 just to the southeast of Vernon.



The stakes are getting higher for CROs and compliance officers. Brenda Boultwood of MetricStream details why it’s increasingly imperative that risk and compliance professionals work hand in hand to address ongoing risks and strengthen organizational GRC efforts.

While risk and compliance functions have run on parallel tracks for years, 2019 is likely to witness a new level of synergy between the two groups as they collectively seek to help their organizations drive performance while preserving integrity.

Partnering in this effort will be the Chief Risk Officer (CRO) who, by virtue of his or her bird’s-eye view of organizational processes and hierarchies, is well-positioned to understand how compliance ties back to risk, where key issues or concerns might lie and how risk frameworks can be integrated with compliance to optimize value.

Some large banks have organizationally integrated their operational risk management functions with their regulatory compliance functions (or are in the process of doing so), but this is less important than understanding the synergies.

With that in mind, here are four specific areas where I believe the CRO can impact compliance in 2019:



(TNS) - Across West Virginia at about 10:30 a.m. on Tuesday, sirens will blare, weather alert radios will activate and test emergency broadcast messages will interrupt television and radio programming as a statewide tornado test alert begins.

Federal, state and county emergency officials urge West Virginia families, businesses, hospitals, nursing homes, schools and government agencies to use the test alert to simulate what actions would be taken in the event of a real tornado emergency, and to update emergency plans as needed.

“Testing your emergency plan, whether with family members or co-workers, helps ensure we will all be ready for the next severe weather event in the state,” said Michael Todorovich, director of the West Virginia Division of Homeland Security and Emergency Management.

“This is the time to work through your emergency plans and to ensure you know what to do if an actual tornado occurs in Kanawha County,” said Kanawha County Commission President Kent Carper.

In the event of a real tornado warning, families are advised to gather in the basements of their homes, or in small, interior rooms with no windows on the home’s lowest level, until the warning ends. If traveling in vehicles when a tornado warning is issued, avoid parking below overpasses or bridges and choose a low, flat site to wait out the warning.



(TNS) - More practical — and perhaps more stylish — than the latest fashion handbag, a bright red emergency preparedness "go bag" distributed by the Department of Homeland Security might be even harder to land than next season's Fendi.

These red backpacks containing items from packets of water to hand-cranked radios are limited in distribution to senior citizens and people with disabilities who attend emergency preparedness training workshops, such as the one put on Wednesday afternoon at the office of the Cape Organization for Rights of the Disabled on Bassett Lane.

But while not everyone can get their hands on one of the DHS go bags, every adult on Cape Cod can learn to develop a response for dealing with natural disasters and other emergencies, said Barnstable police Lt. John Murphy, who attended Wednesday's program with Barnstable Police Sgt. Thomas Twomey.

"The most important thing is the preparedness part," Murphy said. "Get the message out. That is the goal of these types of programs."



Monday, 18 March 2019 15:32

Prepared for Disaster in Cape Cod

Gone are the days when the workplace was built around a fairly straightforward structure, consisting of employer, employee, customer. The winds of technological change may be sweeping away traditional models, but ISO 27501 is helping managers build a more sustainable one for the future.

From the advent of the Internet to what is now known as the Fourth Industrial Revolution, the latest cutting-edge technologies – among them robotics, artificial intelligence (AI), the Internet of Things – are fundamentally changing how we live, work and relate to each other. The issue for business in this new era is not so much about the bottom line, or even just corporate social responsibility, it is also about taking a human-centred approach to the future of work and finding the right tools to ensure that organizations are successful and sustainable.

The likes of AI are presenting a great opportunity to help everyone – leaders, policy makers and people from all income groups and countries – to lead more enriching and rewarding lives, but they are also posing challenges for how to harness these technologies to create an inclusive, human-centred future.

ISO 27501:2019, The human-centred organization – Guidance for managers, can help organizations to meet these challenges. In this brave new world, organizations will not only have an impact on their customers but also on other stakeholders, including employees, their families and the wider community.



Geary Sikich explains why he believes that Brexit is a Black Swan event and describes various issues that enterprise risk managers should consider when assessing and managing Brexit risks.

In his book, ‘The Black Swan: The Impact of the Highly Improbable’, Nassim Taleb defines a Black Swan in the Prologue on pages xvii – xviii, xix, xx – xxi, xxv, xxvii.  I quote a few (what I consider) key points:

xvii: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility.

Second, it carries extreme impact.

Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

xxv: “The Platonic fold is the explosive boundary where the Platonic mindset enters in contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide.  It is here where the Black Swan is produced.”

xxvii: “To summarize: in this (personal) essay, I stick my neck out and make a claim, against many of our habits of thought, that our world is dominated by the extreme, the unknown, and the very improbable (improbable according to our current knowledge)…”

To summarize:

A Black Swan is a highly improbable event with three principal characteristics: it is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was.

Taleb continues by recognizing what he terms the problem: “Lack of knowledge when it comes to rare events with serious consequences.”



Lesley Maea suggests compliance today could take a cue from Marie Kondo in her Netflix hit, “Tidying Up.” To remain safe and secure, use an intranet as a single source of truth. Yes, you read that right: an intranet.

Put everything in one place. Then, you can see what you have and get rid of what you don’t need. That’s one of the organization methods Marie Kondo uses in her Netflix hit, “Tidying Up.”

Our organizational lives look a lot like those of Marie’s clients. Your files are likely stacked up, spilling out or otherwise in disarray throughout your office. Some of you might be thinking, “You haven’t seen my office. I’m positively fastidious.” Well, then, let’s talk about your digital files.

Every organization — every department, even every computer — could use a little digital organization to increase compliance. Especially when it comes to employee handbooks, compliance training and policies and procedures, your employees likely don’t even know where to find the files. If they can find them, they’re probably out of date anyway.

So, let’s put everything in one place to provide employee access, keep it up to date and save your organization money.



Monday, 18 March 2019 15:28

Compliance Can Spark Joy, Right?

The DDoS threat landscape has developed rapidly leaving many organizations behind in both their perception of the risks and their actions to protect against them. Rolf Gierhard looks at the most dangerous and pervasive misunderstandings about DDoS attacks…

Most organizations understand that DDoS attacks are disruptive and potentially damaging. But many are also unaware of just how quickly the DDoS landscape has changed over the past two years, and underestimate how significant the risk from the current generation of attacks has become to the operation of their business. Here, I’m going to set the record straight about seven of the biggest misconceptions that I hear about DDoS attacks.

There are more important security issues than DDoS that need to be resolved first

When it comes to cyber attacks, the media focuses on major hacks, data breaches and ransomware incidents. Yet DDoS attacks are growing rapidly in scale and severity: the number of attacks grew by 71 percent in Q3 2018 alone, to an average of over 175 attacks per day, while the average attack volume more than doubled, according to the Link11 DDoS Report. The number of devastating examples is large. In late 2017, seven of the UK’s biggest banks were forced to reduce operations or shut down entire systems following a DDoS attack, costing hundreds of thousands of pounds according to the UK National Crime Agency. And in 2018, online services from several Dutch banks and numerous other financial and government services in the Netherlands were brought to a standstill in January and May. These attacks were launched using Webstresser.org, the world's largest provider of DDoS-on-demand, which sold attack services for as little as £11. It costs a criminal almost nothing and requires little to no technical expertise to mount an attack, but it costs a company a great deal to fix the damage such attacks cause.

What’s more, DDoS attacks are often used as a distraction, to divert IT teams’ attention away from attempts to breach corporate networks. As such, dealing with DDoS attacks should be regarded as a priority, not a secondary consideration. 



GandCrab's evolution underscores a shift in ransomware attack methods

Don't be fooled by the drop in overall ransomware attacks this past year: Fewer but more targeted and lucrative campaigns against larger organizations are the new MO for holding data hostage.

While the number of ransomware attacks dropped 91% in the past year, according to data from Trend Micro, at the same time some 75% of organizations stockpiled cryptocurrency. The majority that did also paid their attackers the ransom, according to a Code42 study. Overall, more than 80% of ransomware infections over the past year were at enterprises, as cybercrime gangs began setting their sights on larger organizations capable of paying bigger ransom amounts than the random victim or consumer.

The evolution of the prolific GandCrab ransomware over the past few months demonstrates how this new generation of more selective attacks is more profitable to the cybercriminals using it - and underscores how the ransomware threat is far from over.



Monday, 18 March 2019 15:25

Ransomware's New Normal

FEMA’s Integrated Public Alert & Warning System (IPAWS) now includes a new event code called Law Enforcement Blue Alert, or ‘Blue Alert’. 

The new BLU event code is available for selection with the IPAWS Emergency Alert System (EAS), with future plans to release it to Wireless Emergency Alerts (WEA).

The ‘Blue Alert’ provides officials with the ability to alert the public when a law enforcement officer has been injured, killed or is missing. The alert will push real-time information to the public, like the location of the incident and any identifying information – such as suspect or vehicle description – to help locate possible suspects.

Blue Alerts will be transmitted to television and radio stations with EAS and later to cellphones and wireless devices with WEA. Similar to current Amber Alerts for missing children, Blue Alerts enable agencies to rapidly disseminate information to other law enforcement agencies, the public and media outlets.



So you’ve just been put in charge of business continuity at your organization. What’s the first thing you should do? In today’s post, we’ll tell you—and also explain why it’s important and how to go about it.


Many people find themselves thrust into a business continuity (BC) role with little warning or preparation.

They frequently come from backgrounds in risk management, auditing, compliance, or IT.

It’s a daunting prospect to suddenly find yourself in charge of Business Continuity/Disaster Recovery (BC/DR) for even a small organization. It’s like being thrown in the deep end as a beginning swimmer.

Unless you have ice in your veins, or significant BC/DR experience elsewhere, you’re likely to feel overwhelmed. You will have to take time to educate yourself on your new responsibilities, and the learning never stops.

But the very first task is always the same.



Cybrary’s Joseph Perry shares the importance of corporate responsibility and how to navigate the operational and reputational challenges in response to a breach.

The rise of data breaches is well-documented, with thousands taking place every year and at least two or three annually for most organizations. In other words, it’s a question of when – not if – your organization will be affected.

With the element of surprise long gone, so too are any excuses for not having a strategy in place for managing these breaches. And in light of the fact that privacy and cybersecurity are now high-profile concerns in the public eye, it’s increasingly clear that any successful strategy will be built on a solid foundation of corporate responsibility.

Let’s take a closer look at why enhancing corporate responsibility is such an important – and often neglected – component of surviving a breach with your reputation intact. Then I’ll share four practical tips to help move the needle in that direction for your own company.



(TNS) — Residents in Lancaster and DeSoto had an unwanted wake-up call Tuesday when a malfunction set off warning sirens.

The sirens sounded around 2:20 a.m. and didn't go silent until sometime after 3. But unlike Saturday morning, there was no severe weather in the area.

"The Emergency Outdoor Warning Sirens have malfunctioned and are automatically sounding. We are currently working to address the concern, and will provide follow-up as quickly as possible," read a post on the city of Lancaster's Nextdoor page. "Sorry about the inconvenience."

At 4:11 a.m., the city of DeSoto issued a tweet that read, "Hopefully, by now they are all quiet."

The city also alerted residents via its CodeRed notification system saying everything was all clear and there was no emergency.



Researchers have developed a new model which shows that the probability of a catastrophic geomagnetic storm occurring is much lower than previously estimated; but the risk still needs to be taken seriously.

Three mathematicians and a physicist from the Universitat Autònoma de Barcelona (UAB), the Mathematics Research Centre (CRM) and the Barcelona Graduate School of Mathematics (BGSMath) have proposed a mathematical model which allows making reliable estimations on the probability of geomagnetic storms caused by solar activity.

The researchers, who published the study in the journal Scientific Reports (of the Nature group) in February 2019, calculated the probability in the next decade of a potentially catastrophic geomagnetic storm event, such as the one which occurred between the end of August and beginning of September 1859, known as the ‘Carrington Event’. Such an event could create major issues for telecommunications and electricity supply systems across the Earth.

In 1859, astronomer Richard C. Carrington observed the most powerful geomagnetic storm known to date. According to this new research, the probability of a similar solar storm occurring in the following decade ranges from 0.46 percent to 1.88 percent, far less than the percentage estimated before.
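For intuition about figures like these, a probability quoted per decade can be converted to an implied per-year probability if one assumes independent years — a deliberate simplification, not the extreme-value model the UAB/CRM/BGSMath team actually used:

```python
# Convert a 10-year event probability to the implied annual
# probability, assuming years are independent and identically
# distributed (an illustration only, not the study's model).

def annual_prob(decade_prob: float) -> float:
    """Per-year probability implied by a 10-year probability."""
    return 1 - (1 - decade_prob) ** (1 / 10)

for p10 in (0.0046, 0.0188):  # the study's reported 10-year range
    print(f"10-year {p10:.2%} -> annual {annual_prob(p10):.3%}")
```

Under that simplification, the 0.46–1.88 percent decade range corresponds to roughly 0.05–0.19 percent per year.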



(TNS) — A tornado was confirmed in Loving Tuesday night, as heavy wind, rain and hail moved through Eddy County and southeast New Mexico into West Texas.

Eddy County Emergency Manager Jennifer Armendariz said video footage confirmed the tornado touched down at about 5 p.m. in Loving in southern Eddy County.

She said no damage was reported despite accounts of golf-ball-sized hail, and after about two hours the storm had mostly cleared.

Multiple shelters were set up throughout the county and Armendariz said staff was sent home by about 7 p.m.

A unit from the Eddy County Office of Emergency Management was sent out to Loving to perform "recon," Armendariz said, and assess the damage.



Where do I start?

This is a conversation and situation I’ve had many times with different people, and it may feel familiar to some of you. You’ve been tasked with developing a BC/DR program for your organization. Assume you have little or nothing in place, and that what you do have is so out of date it would be wise to start fresh. The question invariably comes up: Where do I start?

Depending on your training or background, this may start with a Business Impact Analysis (BIA) in order to prioritize and analyze your organization’s critical processes. If you have a security or internal audit background, you may feel inclined to start with a Risk Assessment. You may have an IT background and feel that your application infrastructure is paramount and you need a DR program immediately. If you’ve come from the emergency services or military, life safety might be foremost in your mind, and emergency response and crisis management might be the first steps. I’ve seen clients from big pharmaceuticals that make the supply chain their number one priority.

The reality is that although there are prescribed methodologies with starting points outlined in best practices by various institutes and organizations with expertise in the field, there is only one expert when it comes to your organization. You.



Most organizations are doing all they can to keep up with the release of vulnerabilities, new research shows.

Security has no shortage of metrics — everything from the number of vulnerabilities and attacks to the number of bytes per second in a denial-of-service attack. Now a new report focuses on how long it takes organizations to remediate vulnerabilities in their systems — and just how many of the vulnerabilities they face they're actually able to fix.

The report, "Prioritization to Prediction Volume 3: Winning the Remediation Race," by Kenna Security and the Cyentia Institute, contains both discouraging and surprising findings.

Among the discouraging findings are statistics showing that companies have the capacity to close only about 10% of all the vulnerabilities on their networks. This percentage doesn't change much by company size.



About this time each year – when the SEC’s Office of Compliance Inspections and Examinations (OCIE) releases its annual Examination Priorities – we are reminded of how complex compliance can be for SEC-registered firms. As Duff & Phelps’ Chris Lombardy explains, this year is no exception.

In its 2019 Examination Priorities, issued on December 20, 2018, OCIE has outlined six themes that it will primarily, but not exclusively, focus on in the coming months. One new theme, digital assets, joins the five priorities that repeat from 2018:

  1. Matters of importance to retail investors, including seniors and those saving for retirement
  2. Compliance and risk in registrants responsible for critical market infrastructure
  3. Select areas and programs of FINRA and MSRB
  4. Digital Assets (cryptocurrencies, coins and tokens)
  5. Cybersecurity
  6. Anti-money laundering



Wednesday, 13 March 2019 15:17

How Defensible Is Your Compliance Approach?

Attackers used a short list of passwords to knock on every digital door to find vulnerable systems in the vendor's network.

The recent cyberattack on enterprise technology provider Citrix Systems using a technique known as password spraying highlights a major problem that passwords pose for companies: Users who select weak passwords or reuse their login credentials on different sites expose their organizations to compromise.

On March 8, Citrix posted a statement confirming that the company's internal network had been breached by hackers who had used password spraying, successfully using a short list of passwords on a wide swath of systems to eventually find a digital key that worked. The company began investigating after being contacted by the FBI on March 6, confirming that the attackers appeared to have downloaded business documents. 

Password spraying and credential stuffing have become increasingly popular, so companies must focus more on defending against these types of attacks, according to Daniel Smith, head of threat research at Radware.
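Password spraying inverts the usual brute-force pattern: instead of trying many passwords against one account (which trips lockout policies), attackers try one or two common passwords across many accounts. A minimal detection sketch, assuming a hypothetical log format of `(source_ip, username)` failed-login tuples (not any specific product's schema), flags a source whose failures span an unusual number of distinct accounts:

```python
from collections import defaultdict

def find_spraying_ips(failed_logins, account_threshold=10):
    """Flag source IPs whose failed logins span many distinct accounts.

    failed_logins: iterable of (source_ip, username) tuples from an
    authentication log (hypothetical format, for illustration only).
    """
    accounts_by_ip = defaultdict(set)
    for ip, user in failed_logins:
        accounts_by_ip[ip].add(user)
    # A legitimate user mistypes one or two accounts; a sprayer touches many.
    return {ip for ip, users in accounts_by_ip.items()
            if len(users) >= account_threshold}

# Example: one IP fails against 12 different accounts and gets flagged,
# while a user fumbling their own password does not.
log = [("203.0.113.5", f"user{i}") for i in range(12)]
log += [("198.51.100.7", "alice"), ("198.51.100.7", "alice")]
print(find_spraying_ips(log))  # {'203.0.113.5'}
```

Real detections would also window by time and weight by account sensitivity; the distinct-account count is simply the signal that distinguishes spraying from ordinary failed logins.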



Wednesday, 13 March 2019 15:15

Citrix Breach Underscores Password Perils

(TNS) - Next month marks the ninth anniversary of the British Petroleum Deepwater Horizon oil rig explosion off the coast of Louisiana that killed 11, injured 17 others, and spewed millions of gallons of oil into the Gulf of Mexico.

For those of us closest to the accident, the April 20, 2010, explosion will always be, first and foremost, a grave tragedy. But for analysts who study such things, the mishap is also something else: a case study yielding insights about how similar mistakes might be prevented in the future.

Or so we’ve been reminded by “Meltdown,” a 2018 book by Chris Clearfield and András Tilcsik that’s just been published in paperback. The subtitle of “Meltdown” is “What Plane Crashes, Oil Spills, and Dumb Business Decisions Can Teach Us About How to Succeed at Work and at Home.”

Clearfield is a former derivatives trader who lives in Seattle. Tilcsik, who researches organizational behavior, lives in Toronto. “Meltdown” is about a number of systems failures, including Deepwater Horizon, a crash on the Washington, D.C. metro, and an accidental overdose in a state-of-the-art hospital.



Flexible workspaces are saving companies time and money when disaster strikes, says Joe Sullivan, Head of Workplace Recovery Product at Regus

According to the 2019 WEF Global Risks Report, ‘extreme weather events’ are the biggest risk we face as an international community, with natural disasters, data fraud and cyber-attacks following close behind. Preventing the unpredictable is beyond our control. What we can manage, however, is our level of preparation when disaster strikes.

At Regus, we speak from experience. In September 2018, the effects of Hurricane Florence impacted some of our centres in North Carolina, South Carolina and Virginia. The devastation was felt by so many of our colleagues, clients and their friends and family. Thankfully, our North America teams were ready to step in and help recover these facilities while taking care of our customers.

The financial cost of disasters such as this can be difficult to absorb. Since 2000, natural disasters have cost the global economy more than $2.4trn – more than $150bn each year. But it’s not just the headline-grabbing incidents that affect businesses. It’s the everyday ones, too. A burst water pipe in your office may not sound like much of a threat but, if it means your premises are unusable for a month, what’s your back-up plan?



A new guide from the Cloud Security Alliance offers mitigations, best practices, and a comparison between traditional applications and their serverless counterparts.

Serverless computing has seen tremendous growth in recent years. That growth has been accompanied by a flourishing ecosystem of new solutions offering observability, real-time tracing, deployment frameworks, and application security.

As serverless security risks started to gain attention, scoffers and cynics repeated the age-old habit of calling "FUD" — fear, uncertainty and doubt — on any attempt to point out that while serverless offers tremendous value in the form of rapid software development and a huge reduction in TCO, it also brings new security challenges.



Wednesday, 13 March 2019 15:12

The 12 Worst Serverless Security Risks

Courtesy of Mail-Gard




Mail-Gard has the opportunity to exhibit at many industry shows and conferences, but one of our go-to events is the DRJ Spring conference, which is being held March 24–27 at the Disney Coronado Springs Resort in Orlando, FL. We can always count on the Disaster Recovery Journal (DRJ) to host an informative and invaluable conference that attracts speakers and attendees from all areas of the business continuity (BC), disaster recovery (DR), and risk management (RM) fields. For us, it’s a chance to connect with leaders and participants in our shared industry.

Risk Management is the Focus of DRJ Spring 2019

The theme of this spring’s conference is “Managing Risk in an Uncertain World,” and it’s certainly true that our world has become unpredictable in many ways. One of the things we’ve learned at Mail-Gard is that it’s truly impossible to plan for every possible emergency situation, but what we can do is to plan and prepare to manage the risks that we’re aware of and to refresh our recovery solutions on a regular basis so change and uncertainty become manageable, as well.

The DRJ Spring 2019 Conference gives us the opportunity to meet with current clients looking to polish up their DR plans while enhancing their industry knowledge by taking a few classes. In addition, we also get to talk to people who either don’t have a DR plan at all, or who have realized that their DR vendor isn’t working. In either case, this is where Mail-Gard shines, because our focus is helping companies achieve their risk management goals. We assist companies in designing print-to-mail recovery solutions or helping them fix what’s wrong with their current plan.

For Mail-Gard, another advantage of attending the DRJ Spring 2019 conference will be the opportunity to brush up on the latest trends in BC/DR, such as cyber security, which is a moving target for planning and updating procedures. In fact, DRJ states, “When it comes to business continuity, what worked a year ago will not be effective today,” which is why risk management is a never-ending job. As a print-to-mail disaster recovery provider, Mail-Gard represents a different element within the larger BC/DR arena, but it’s a vital part of a successful BC/DR plan. In fact, we consider it the most important component, which is why it’s the sole focus of our business.

As a DR print-to-mail specialist, Mail-Gard has many advantages over our competition who offer DR mailing support as a sideline. Critical mailings are critical for a reason, whether financial or regulatory, and it’s surprising how often they’re overlooked or minimized in favor of the trending DR issues of the day. If you’re in Orlando during the last week in March, please stop by Mail-Gard Booth #706 at DRJ Spring 2019. The Mail-Gard group would welcome the opportunity to help you make sure that your DR plan is cleaned up, complete, and ready for spring.

Michael Henry

Vice President of Mail-Gard with more than 30 years of experience in direct mail. Specializes in leading and directing operations teams by simplifying, staying focused, and being relentless. Proud to be part of an organization that cares about its people. Longtime Philadelphia Eagles season ticket holder who also loves the Phillies and Flyers, being near the water, and coaching his kids’ sports activities.

My colleagues J. P. Gownder, Craig Le Clair, and I just published the results of a year-long study to answer the question “What happens when digital business systems and physical-world processes come together?” The answer: Atoms get their revenge. By that we mean that so much of our attention has been focused on digital business over the past decade that we have almost forgotten where business happens — in the real world.

What about eCommerce, online trading, and digital platforms? Yes, they are digital, but at the end of the day, it is still humans — sitting at their desks, in hotels, on airplanes, in the plant, at ball games, or at conferences — that drive most of the decisions around who buys what and how much, even if they’re made by programming algorithms. And all of that happens in the world of atoms. A big takeaway from our report is that when algorithms start to act on the physical world, firms have the opportunity to change their relationship with their customers. In other words, algorithms plus atoms balance the power between customers and businesses. We see savvy businesses deploying algorithms in the real world to balance customer engagement and efficient operations.

Consider, for example, innovative startup DocBox. It makes a clinical process management solution for hospitals that promises to help clinicians eliminate medical mistakes, improve clinical workflows and processes, and free up time. At the heart of its solution is a “patient area network” that integrates data from bedside machines, making insights available to doctors. While that is good for doctor and patient engagements, providers are exploring how to drive intelligence into logistics and operations to ensure that high-value capital equipment is placed and used efficiently as well.



Evan Francen, CEO of FRSecure and Security Studio, makes the case for adopting a third-party information security risk management (TPISRM) program. He outlines how to get started and explains why the common excuses for ignoring the risks don’t hold water.

Third-party information security risk management (TPISRM) is more critical today than it’s ever been. There is little doubt amongst information security experts that TPISRM is essential to the success (or failure) of your information security efforts, but confusion in the marketplace is making it difficult to tell truth from hype. Ignoring the risks won’t make them go away, so something must be done. We just need to make sure it’s the right “thing.”

The Case for TPISRM

If the case for TPISRM isn’t obvious to you, you’re not alone. Only 16 percent of the 1,000 Chief Information Security Officers (CISOs) surveyed in a recent study claim they can effectively mitigate third-party risks, while 59 percent of these same CISOs claim their organizations have experienced a third-party data breach.

Third parties are implicated in up to 63 percent of all data breaches and regulators are increasingly scrutinizing how organizations handle third-party risks. Your organization can spend millions of dollars on a secure infrastructure, best-in-class training and awareness solutions and the most skilled professionals, but if you neglect to account for third-party risks, some or all of your investment is a waste.

Please let these numbers sink in for a moment. Logically, how do we deny the need for sound and cost-effective TPISRM when we know that it will decrease the likelihood and impact of a data breach? Logic says one thing, yet 57 percent of organizations don’t even have an inventory of the third parties they share sensitive information with.



It has been noted numerous times, in multiple studies, that building occupants often ignore or are slow to respond to standard fire alarm sounders: this is ‘bystander apathy’. This article looks at the issue and suggests some solutions.

Bystander apathy – a condition in which people ignore an emergency because they believe someone else will take responsibility – is the social psychological phenomenon that can affect the pre-movement phase of an escape, prolonging the time it takes before people react to an audible alarm.

“There are multiple explanations as to why we have a natural tendency to dismiss alarms and any delay could prove critical or at worst, catastrophic,” says Steve Loughney of Siemens Building Technologies.  “People respond to others around them and a collective position often emerges during emergencies i.e. if one person moves, there is a likelihood that others will follow with the reverse also true.”

“Doubts about the validity of warning sirens might also stem from loss of confidence we have in standard fire alarm systems. Nuisance alarms or false alarms have lulled us into a situation where blaring sounds or klaxons are often casually dismissed as non-emergency or non-life threatening,” continues Mr. Loughney.

This lack of urgency was echoed in studies by the International Rescue Committee when it found that less than 25 percent of occupants interpreted the sound of the fire alarm as a potential indication of a real emergency during mid-rise residential evacuation trials.



Coinhive has remained on top of Check Point Software's global threat index for 15 straight months

Cryptominers continue to dominate the malware landscape, just as they did all of 2018. But a decision by cryptocurrency mining service Coinhive to shut down last week could change that soon, security vendor Check Point Software said in its latest malware threat report, released Monday.

Coinhive has topped Check Point's global threat index for 15 straight months, including this February.

Coinhive's software is designed to give website owners a way to earn revenue by using the browsers of site visitors to mine for Monero cryptocurrency. The software itself — like many other cryptominers — is not malicious. However, cybercriminals have been using Coinhive extensively to surreptitiously mine for Monero on hacked websites, making it a top threat to website operators globally in the process. Many websites that have installed Coinhive also have done so without explicitly informing site visitors about it.



Industry leaders debate how government and businesses can work together on key cybersecurity issues

If money were no object, and you didn't have to worry about bureaucracy or politics, what would you have your organization do to make a difference in the public-private sector discourse on cybersecurity? How would you improve tactics and techniques?

"The thing I'd love to be able to do is share in real time," said Neal Ziring, technical director for the National Security Agency's Capabilities Directorate. The question was posed to him, and two other panelists from the public and private sectors, in the RSA Conference panel "Behind the Headlines: A Public-Private Discourse on Cyber-Defense," last week in San Francisco.

Ziring explained how if policy were not an issue, he would want to take NSA's foreign intelligence and turn it into actionable warnings in real time. "That's not easy. We're trying to work in that direction," he said, adding that there are "considerable policy obstacles to that right now."

Defenders are overwhelmed with an onslaught of threat data, user error, poor endpoint protection tools, and myriad other factors making their jobs harder. This discussion brought together security experts to put the spotlight on which threats should be prioritized and how the government and private sector can better improve their relationships to address them.



(TNS) - The State Emergency Management Agency officially announced Thursday that the Federal Emergency Management Agency has awarded three Missouri school districts $3.5 million in grant funding to build tornado safe rooms.

These include the previously reported tornado shelter that will be added on to the Neosho School District's new Goodman Elementary School, as well as a stand-alone safe room on the Miller School District's high school campus in Lawrence County, and a safe room on the elementary and middle school campus of Christian County's Sparta School District.

"This is the actual, final step that they put in writing and now we have the official agreement," said Jim Cummins, Neosho superintendent. "What we (Neosho) had before was a verbal phone call from SEMA saying we had been approved for it, and now they have just put it in writing."

The three safe rooms would be capable of sheltering more than 2,250 people combined, according to a SEMA news release.



Applicants to three private colleges this week discovered just how steep the price of admission can run.

Hackers breached the system that stores applicant information for Oberlin College in Ohio, Grinnell College in Iowa and Hamilton College in New York and emailed applicants, offering them the chance to buy and view their admissions file. For a fee, the sender promised access to confidential information in the applicant’s file, including comments from admissions officers and a tentative decision. The emails demanded thousands of dollars in ransom from prospective students for personal information the hackers claimed to have stolen.

All three schools use Slate, a popular software system, to manage applicants’ information. Slate is used by more than 900 colleges and universities worldwide. The company is not aware of other affected colleges, said Alexander Clark, chief executive of Technolutions, Slate’s parent company. Officials from the affected schools declined to comment on the scope of the data breach.



There are a lot of ways that business continuity programs go off track. Here are some of the main ones, together with a list of what successful programs do to keep rolling along.

We are seeing an increase in the number of companies that recognize that a business continuity program is a must-have.

This is great, but it’s still the case that too many programs are floundering.

In our experience working as BC consultants for firms across a range of sizes and industries, we see the same problems come up again and again.

If you’re just starting a program, do yourself a favor: Try not to make any of the mistakes listed below.



How do you create an insights-driven organization? One way is leadership. And we’d like to hear about yours.

Today, half of the respondents in Forrester’s Business Technographics® survey data report that their organizations have a chief data officer (CDO). A similar number report having a chief analytics officer (CAO). Many firms without these insights leaders report plans to appoint one in the near future. Advocates for data and analytics now have permanent voices at the table.

To better understand these leadership roles, Forrester fielded its inaugural survey on CDO/CAOs in the summer of 2017. Now we’re eager to learn how the mandates, responsibilities, and influence of data and analytics leaders and their teams have evolved in the past 18 months. Time for a new survey!

Take Forrester’s Data And Analytics Leadership Survey

Are you responsible for data and analytics initiatives at your firm? If so, we need your expertise and insights! Forrester is looking to understand:

  • Which factors drive the appointment of data and analytics leaders, as well as the creation of a dedicated team?
  • Which roles are part of a data and analytics function? How is the team organized?
  • What challenges do data and analytics functions encounter?
  • What is the working relationship between data and analytics teams and other departments?
  • What data and analytics use case, strategy, technology, people, and process support do these teams offer? How does the team prioritize data and analytics requests from stakeholders?
  • Which data providers do teams turn to for external data?
  • Which strategies do teams use to improve data and analytics literacy within the company?

Please complete our 20-minute (anonymous) Data and Analytics Leadership Survey. The results will fuel an update to the Forrester report, “Insights-Driven Businesses Appoint Data Leadership,” as well as other reports on the “data economy.”

For other research on data and analytics leadership, please also take a look at “Strategic CDOs Accelerate Insights-To-Action” and “Data Leaders Weave An Insights-Driven Corporate Fabric.”

As a thank-you, you’ll receive a courtesy copy of the initial report of the survey’s key findings.

Thanks in advance for your participation.


Friday, 08 March 2019 16:27

Data And Analytics Leaders, We Need You!

As more enterprise work takes place on mobile devices, more companies are feeling insecure about the security of their mobile fleet, according to a new Verizon report.

SAN FRANCISCO – As more enterprise work takes place on mobile devices, more companies are feeling insecure about the security of their mobile fleet. That's one of the big takeaways from Verizon's "Mobile Security Index 2019," released here this week.

The report is based on responses from 671 enterprise IT professionals from a wide range of business sizes across a broad array of industries. The picture they paint in their responses is one where mobile security is a major concern that's getting worse, not better, as time goes on.

More than two-thirds (68%) say the risks of mobile devices have grown in the past year, with 83% now saying their organizations are at risk from mobile threats. Those risks have changed in the year since the first edition of the "Mobile Security Index."

"In the first iteration, organizations were more nervous about losing access to the device itself" through theft or accidental loss, said Matthew Montgomery, a director with responsibilities for business operations, sales, and marketing at Verizon, in an interview at the RSA Conference. This time, they are worried about " ... having a breach or losing access to the data, because the device became very centric to businesses in the way they work."



Comforte AG’s Jonathan Deveaux stresses that while compliance with the GDPR is a worthy goal, adhering to the regulation doesn’t necessarily mean your organization is safe. Consider both compliance and security a journey, not a destination.

The European General Data Protection Regulation (GDPR) came into effect on May 25, 2018, ushering in a new era of data compliance regulation across the world. GDPR-like regulations have emerged in Brazil, Australia, Japan and South Korea, as well as U.S. states such as New York and California.

The GDPR was introduced to protect EU individuals’ personal information, collected by organizations, through regulation on how the data can be collected and used. Even though it is European law, the scope of the legislation affects organizations around the world.

Despite a two-year phase-in period (May 24, 2016 to May 25, 2018), many organizations around the globe remain noncompliant. A GDPR pulse survey by PwC in November 2017 revealed only 28 percent of U.S. companies had begun preparing for GDPR, and only 10 percent responded saying they were compliant.



Social engineering scams continued to be the preferred attack vector last year, but attackers were forced to adapt and change.

The growing sophistication of tools and techniques for protecting people against phishing scams is forcing attackers to adapt and evolve their methods.

A Microsoft analysis of data collected from users of its products and services between January and the end of December 2018 showed phishing was the top attack vector for yet another year. The proportion of inbound emails containing phishing messages surged 250% between the beginning and end of 2018. Phishing emails were used to distribute a wide variety of malware, including zero-day payloads.

However, the growing use of anti-phishing controls and advances in enterprise detection, investigation, and response capabilities are forcing attackers to change their strategies as well, Microsoft said.

For one thing, phishing attacks are becoming increasingly polymorphic. Rather than using a single URL, IP address, or domain to send phishing emails, attackers last year began using varied infrastructure to launch attacks, making them harder to filter out and stop.
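The practical consequence of polymorphic phishing is that exact-match indicator blocklists stop working: every message carries a fresh URL, so yesterday's indicator never matches today's email. Defenses instead have to match on what stays constant, such as the registered domain. A simplified sketch (the URLs and domains are invented for illustration, and real deployments would use a public-suffix list rather than this naive host comparison):

```python
from urllib.parse import urlparse

def exact_block(url, blocklist):
    # Matches only indicators seen before; rotating paths defeat it.
    return url in blocklist

def domain_block(url, blocked_domains):
    # Collapse rotating subdomains/paths down to the registered host.
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blocked_domains)

blocklist = {"http://evil.example/login"}          # yesterday's indicator
seen_variant = "http://cdn3.evil.example/verify?id=91"  # today's variant

print(exact_block(seen_variant, blocklist))         # False: variant slips through
print(domain_block(seen_variant, {"evil.example"})) # True: caught by domain
```

The same idea generalizes to other stable features (sending infrastructure, TLS certificate fingerprints, page templates); the point is to key the filter on whatever the attacker cannot cheaply vary.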



ERP Maestro’s CEO Jody Paterson discusses cybersecurity risk disclosure and compliance and how executives are being held more personally accountable for nondisclosure as outlined by the SEC.

Companies face a multitude of risks and threats. Reporting them to stakeholders and investors is a requirement, and serious consequences may ensue for a failure to do so – for the company and, increasingly, for business leaders. It’s a liability no company wants and a personal disaster no executive wishes to encounter. To prevent such personal catastrophes, executives need to understand how they may be held accountable.

For public companies, disclosing business risks has long been mandatory on periodic reports, such as annual reports, 10-K forms, quarterly 10-Qs and 8-K current incident reports as needed.

As technology has become not only the primary offering of many companies but also the norm for business operations and financial management, external risks, such as security breaches and cyberattacks, have been included in the Securities and Exchange Commission’s (SEC) risk reporting requirements.



Over and over, clients tell us they just don’t get enough funding for the kind of privacy programs they want to create. In fact, many privacy budgets shrank in 2019, after firms were forced to spend more than they expected on GDPR compliance in 2018. But what if we told you that customer-centric privacy programs could actually drive a positive ROI — would your CFO find the budget then? We’re betting so.

That’s why we recently built a Total Economic Impact model on the ROI of privacy. We were convinced that there’s more to privacy investments than CYA, and we were right.



(TNS) — They started Alabama's way from Louisiana as soon as word went out about Sunday's deadly tornadoes in Lee County. It was the same when Hurricane Michael flattened Mexico Beach, Fla., last year. It's been the same since 2016. People were in trouble, and they went on the road.

They're called the Cajun Navy, but they're not one organization. The Louisiana Secretary of State's website lists 11 different organizations with "Cajun Navy" in their name. The best known, perhaps, is Cajun Navy 2016. It is named for the year it was founded by two friends in Baton Rouge after they had volunteered in the catastrophic flooding there.

"We're the ones that have been to the White House multiple times," Vice President Billy Brinegar said Tuesday. "We do things the right way. We try to get involved with the local EOCs (Emergency Operations Centers) or fire departments or whoever, just coordinate with them so they know we're on the scene and we work together."



Why Do Bots Fail to Scale Across the Enterprise?

The interest in RPA has skyrocketed, and company leaders are challenging their teams to find out more about the technology and its associated benefits.  With the increased interest in RPA, we have seen a significant uptick in teams testing the RPA waters by starting Bot development and implementation pilots.  What we have also found is that teams are struggling to move beyond the pilots due to some fundamental errors made during RPA Program Setup and Execution and Bot Development and Implementation.

RPA Program Setup and Execution

What we find is a lack of an enterprise RPA strategy and foundation, along with a lack of understanding of RPA, its solution capabilities, and where to focus efforts.



Whitefly is exploiting DLL hijacking with considerable success against organizations since at least 2017, Symantec says. 

Whitefly, a previously unknown threat group targeting organizations in Singapore, is the latest to demonstrate just how effective some long-standing attack techniques and tools continue to be for breaking into and maintaining persistence on enterprise networks.

In a report Wednesday, Symantec identified Whitefly as the group responsible for an attack on Singapore healthcare organization SingHealth last July that resulted in the theft of 1.5 million patient records. The attack is one of several that Whitefly has carried out in Singapore since at least 2017.

Whitefly's targets have included organizations in the telecommunications, healthcare, engineering, and media sectors. Most of the victims have been Singapore-based companies, but a handful of multinational firms with operations in the country have been affected as well.



The failed Fyre Festival of 2017 serves as a cautionary tale to any who’d ignore warnings from trusted advisers and key stakeholders. Sandra Erez discusses how the Fyre Festival went so disastrously wrong – and the lesson compliance practitioners should take away.

The recent Netflix documentary “Fyre: The Greatest Party that Never Happened” revealed the 2017 fiasco to be a real “trip” – the kind that comes from bad LSD with lingering, long-term effects. Touted as a luxury music festival set on the balmy beaches of the Bahamas, this highly publicized would-be event tantalized millennials with the chance to live the elusive elite lifestyle for a weekend (and talk about it for the rest of their lives). Dangling ads of bikinied supermodels frolicking in the waves succeeded as the bait that would reel thousands of suckers in to this Titanic event – hook, line and sinker. Never mind that it all seemed to be too good to be true; everything is possible if you have the right app, the right hair, the right attitude and are in search of the perfect Instagram backdrop – real or not.

The Fyre Festival launch started off with a splash worthy of any jet ski – selling 95 percent of the costly tickets within 24 hours. Like moths to a flame, the target audience was enticed into the web spun with golden promises, thereby proving to founder Billy McFarland and his team that his idea was on fire. Now, totally pumped and egged on by their initial spectacular success, the staff and partners literally dug their heels (and unfortunately their heads) into the sand to get this show on the road.



Thursday, 07 March 2019 14:21

Liar, Liar, Pants on Fyre

Information travels more quickly than ever.

If a disaster occurs in your community, you will need to work quickly and decisively to ensure that the information that gets to the public is accurate, balanced and useful to the people who need it most. Good crisis communications is the result of a clear and well-developed media relations policy. If you want the headlines to reflect an accurate story, you will need to understand what drives them and how you can establish a beneficial and positive relationship with the press. 

Good Crisis Communication Starts Before the Crisis

A good crisis communications plan will ensure that your organization is prepared to get information out in a way that is helpful to all stakeholders. While you cannot anticipate every potential crisis, most well-constructed plans are flexible enough to address a range of needs.

Begin by considering what sorts of crises are most likely in your community. What will be the potential impact on the people and businesses within your community? For instance, a city in the Midwest can expect periodic severe snow storms. These may cause power outages and leave roads impassable for a period of time. Cities in the southeastern part of the US should be prepared for hurricanes in the warmer half of the year. Areas throughout the country should have plans for manmade disasters that include mass shooter events.



Wednesday, 06 March 2019 15:34

What will your headline be?

(TNS) — The city of Dayton, Ohio, received more than 12,000 phone calls during its nearly catastrophic water main break emergency that happened Feb. 13-15.

The cause of the break still isn’t known, since city crews still haven’t been able to inspect the line because of high river levels.

The city said it has been monitoring the river daily and would evaluate again Monday.

Though it wasn’t an especially long emergency, it was an intense time in Dayton.

In 72 hours, Dayton’s water dispatch center received 8,958 calls, or about 20 times the number it receives in a typical week at this time of year. Dispatch handled 393 calls in the last week of January and 463 in the first week of February.

“When this happened, our call centers were completely overwhelmed with phone calls,” said Dayton City Manager Shelley Dickstein.



Problem lies in the manner in which Word handles integer overflow errors in OLE file format, Mimecast says.

The manner in which Microsoft Word handles integer overflow errors in the Object Linking and Embedding (OLE) file format has given attackers a way to sneak weaponized Word documents past enterprise sandboxes and other anti-malware controls.

Security vendor Mimecast, which discovered the issue, says its researchers have observed attackers taking advantage of the OLE error in recent months to hide exploits for an old bug in the Equation Editor component of Office that was disclosed and patched in 2017.

In one instance, an attacker dropped a new variant of a remote access backdoor called JACKSBOT on a vulnerable system by "chaining" or combining the Equation Editor exploit with the OLE file format error. 



When business continuity (BC) professionals hear that the Polar Vortex is collapsing, they aren’t simply worried about the inconvenience of cold temperatures — they are focused on the impact of severe weather to business operations and workforce safety.

Natural disasters and extreme weather resulted in approximately $160 billion worth of damage last year, and reinsurance company Munich RE forecasts this figure will be surpassed in 2019. Abnormal weather patterns — the type that can cause extended cold weather snaps as well as more frequent and intense winter storms — require that BC leaders properly plan for this new weather reality.

And it appears that organizations are acutely aware of the role workforce communications plays in winter weather response. In a survey conducted by research firm DRG last year, 47% of decision-makers said severe and extreme weather events are their leading concern when it comes to emergency communications and response — outpacing other events such as active shooters (23%), cybersecurity attacks (13%), IT outages (10%), and workplace violence (6%).

With extreme and severe winter weather raising the stakes for business continuity, it also raises the probability of mistakes: requiring that employees commute into work in unsafe conditions or failing to communicate with your workforce in a timely fashion can elevate human and business risk. Organizations can’t change the weather, but they can mitigate its impact through proper preparation and communication before, during and after adverse winter weather hits. This starts with eliminating six common winter weather mistakes.



Last April, we outlined how the “Tech Titans” (Amazon, Google, and Microsoft) were poised to change the cybersecurity landscape by introducing a new model for enterprises to consume cybersecurity solutions. Security has long been delivered as siloed solutions located on-premises. These solutions were hard to buy and hard to use. Security leaders were hampered by the technologies’ lack of connectedness, poor user interfaces, and difficulty of administration. Understaffed, stressed security teams struggled to balance the responsibilities of defending their enterprise while updating an ever-expanding toolset.

Cloud adoption by cybersecurity teams also lags other parts of the enterprise. Many of the security tools enterprises rely on are still deployed on-premises, even as more and more of IT shifts to the cloud. Running counter to other parts of the enterprise, most security teams incur the expense of pulling logs from cloud environments to then process and store them on-premises.

Security analytics platforms such as legacy security information management (SIM) systems struggled to keep pace with the increasing volume and variety of data they process. Unhappy users complained about the inability of their SIMs to scale and the volume of alerts they must investigate.

Enterprises struggling with the cost of data analysis and log storage turned to open source tools such as Elasticsearch, Logstash, and Kibana (ELK) or Hadoop to build their own on-premises data lakes. But then they were unable to glean useful insight from the data they had collected and realized that the expense of building and administering these “free” tools was just as great as the cost of commercial tools.



There are nine enterprise risk management (ERM) activities that at least nine in ten of the North American chief risk officers (CROs) we surveyed said they perform in one way or another over the course of a year. None of these activities is necessarily strategic, but a strategic CRO can put a strategic spin on any of them.

And the more times CROs are heard speaking strategically about their work, the more likely they will be invited to play a role in the future strategic activities of the firm.

In general, the way to make any of these activities more strategic is to shift orientation away from focusing on separating information by risk and toward presenting the information in the context of strategy.  Easy to say, but the nuances of how to do that play out differently for each activity. 

Let’s review these nine activities and see how the seemingly mundane can be strategic.



Slack, the cloud-based set of collaborative tools for teams, is taking over, and changing the way we work for good. Here’s what co-founder Stewart Butterfield has to say about the workplace of the future

Haven’t you heard? Email is dead. At least, that’s what Stewart Butterfield would have you believe. Launched in 2014, his cloud-based ‘virtual assistant’ (which provides team collaboration tools) is doing away with the need for time-consuming and inefficient electronic communication – and changing the way we work altogether.

He might be on to something. Slack is one of the fastest-growing business applications in the last decade. According to its latest figures, there are now more than eight million daily active users across more than 500,000 organisations that use the platform. The company has more than three million paid users and 65 per cent of companies in the Fortune 100 are paid Slack users. More than 70,000 paid teams with thousands of active users connect in Slack channels across departments, borders and oceans.

So when Butterfield and his team share their opinions on the future of work, it’s worth paying attention. Here are five of their predictions.



Monday, 04 March 2019 16:13

The Future of Work According to Slack

Charlie Maclean Bristol explains why developing a playbook for the main types of cyber attacks will help businesses respond effectively when an attack occurs. He also provides a checklist covering the areas that such a playbook should include.

When I first thought about cyber playbooks I envisaged the playbook helping senior management or the crisis team make a key decision in a cyber incident, such as whether or not to unplug the organization from the internet and prevent any network traffic on the organization’s IT network. As this is a critical decision for the organization and the consequences of making the wrong decision are huge, this type of playbook would help the team understand, at short notice, what factors they should consider and the impact of the different decisions they could make.

I was running a cyber exercise a couple of weeks ago and suddenly thought that there was a need for another type of playbook, which is basically a plan for how to deal with different types of cyber attack. As we know, the more planning we do the better prepared we will be for managing an incident, and thinking through how we would respond throws up questions and issues which we can work to solve, without the cold sweat and pressure of the incident taking place.

Cyber response should be in two parts. Firstly, you need an incident management team to manage the consequences of the cyber-attack. This team is separate from the cyber incident response team, which deals with the technical response and concentrates on restoring the organization’s IT service. The organization’s incident management team can be the same as the crisis management team, as they are going to be dealing with the reputation and strategic impacts of the incident.



Oftentimes, responsibility for securing the cloud falls to IT instead of the security organization, researchers report.

Businesses are embracing the cloud at a rate that outpaces their ability to secure it. That's according to 60% of security experts surveyed for Firemon's first "State of Hybrid Cloud Security Survey," released this week.  

Researchers polled more than 400 information security professionals, from operations to C-level, about their approach to network security across hybrid cloud environments. They learned not only are security pros worried – oftentimes they don't have jurisdiction over the cloud.

Most respondents say their businesses are already deployed in the cloud: Half have two or more different clouds deployed, while 40% are running in hybrid cloud environments. Nearly 25% have two or more different clouds in the proof-of-concept stage or are planning deployment within the next year.



Emergency Response? Crisis Management? Business Continuity? Disaster Recovery? How do you know which plan to use during an incident?

It’s often confusing which plan to activate, and who is in charge.

Each plan should clearly identify the scope and responsibilities for executing the plan and have distinct and disparate objectives. During the life-cycle of an incident, all of the plans may be activated – but often only some of them are. Like many things, “it depends.”

Let’s go into more detail on each of the plans and their purpose.



With famous CEOs and big-name proponents of a shorter working week getting their voices heard, Ben Hammersley finds out whether more time out of the office – with the same amount of work to do – really can be achieved

On the face of it, it’s kind of a classic line for a billionaire who owns a tropical island paradise to say. The sort of statement that, when read on a rainy commute home from another 60-hour week would usually result in the newspaper being tossed aside. But, when Sir Richard Branson opined in a blog post that flexible working, with unlimited holiday time, is the way to achieve happiness and success at work, he wasn’t just talking about senior management. It was about everyone. Further still, according to CNBC, he’s recommending even longer weekends:

“Many people out there would love three-day or even four-day weekends,” he reportedly said. “Everyone would welcome more time to spend with their loved ones, more time to get fit and healthy and more time to explore the world.”



Over a billion people around the world have some form of disability. Empowerment and inclusiveness of this large section of the population are therefore essential for a sustainable society, and make up the theme of this year’s International Day of Persons with Disabilities. The Day also contributes to the goals outlined in the United Nations 2030 Agenda for Sustainable Development, which pledges to “leave no one behind”. Many of ISO’s International Standards are key tools to achieving these goals, and there are many more in the pipeline.

From signage in the street to the construction of buildings, ISO standards help manufacturers, service providers, designers and policy makers create products and services that meet the accessibility needs of every person. These include standards for assistive technology, mobility devices, inclusivity for aged persons and much more. In fact, the subject is so vast, we even have guidelines for standards developers to ensure they take accessibility issues into account when writing new standards.

Developed by ISO in collaboration with the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU), ISO/IEC Guide 71, Guide for addressing accessibility in standards, aims to help standards makers consider accessibility issues when developing or revising standards, especially if they have not been addressed before.



Summary FINRA is conducting a retrospective review of Rule 4370 (Business Continuity Plans and Emergency Contact Information), FINRA’s emergency preparedness rule, to assess its effectiveness and efficiency. This Notice outlines the general retrospective rule review process and seeks responses to several questions related to firms’ experiences with this specific rule.



To effectively defend against today's risks and threats, organizations must examine their failings as well as their successes.

In life in general — and, of course, in security specifically — it is helpful to understand when I am the problem or when my organization is the problem. By that, I mean that it is important to discern when an approach to a problem is simply ineffective. When I understand that an approach doesn't work, I can try different things until I find the right solution. This is the definition of repetition.

Redundancy, on the other hand, is when I (or my organization) keeps trying the same approach and nothing changes. It makes no sense to expect different results without a different approach. This, of course, is the definition of redundancy. What can the difference between repetition and redundancy teach us about security? An awful lot.



(TNS) - This would be a first for California: state government buying insurance to protect itself against overspending its budget.

But before you start pelting the politicians and screaming fiscal irresponsibility, know that the budget-busting would be for fighting wildfires.

That puts it in an entirely different category from, say, controversial spending to help immigrants who are here illegally, or trying to register voters at the notoriously jammed DMV.

No sane person is going to gripe about overspending tax dollars to douse a deadly wildfire.

But it does amount to a sucker punch for state budgeters, who might be forced to grab money from other state programs to pay for the firefighting. Fortunately in recent years the robust California economy has been producing state revenue surpluses. So, little problem.



The Watchlist, which contained the identities of government officials, politicians, and people of political interest, is used to identify risk when researching someone.

A data leak at Dow Jones exposed the financial firm's Watchlist database, which contains information on high-risk individuals and was left on a server sans password.

Watchlist is used by major global financial institutions to identify risk while researching individuals. It helps detect instances of crime, such as money laundering and illegal payments, by providing data on public figures. Watchlist has global coverage of senior political figures, national and international government sanction lists, people linked to or convicted of high-profile crime, and profile notes from Dow Jones citing federal agencies and law enforcement. 

The leak was discovered by security researcher Bob Diachenko, who found a copy of the Watchlist on a public Elasticsearch cluster. The database exposed 2.4 million records and was publicly available to anyone who knew where to find it – for example, with an Internet of Things (IoT) search engine, he explained in a blog post.
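The failure mode behind this leak is easy to probe from the defender’s side: an unsecured Elasticsearch cluster answers read requests with no credentials at all. A minimal self-audit check might look like the sketch below; the endpoint is Elasticsearch’s standard `_cluster/health` API, while the base URL and function name are illustrative assumptions, not from any particular tool:

```python
import json
import urllib.request
import urllib.error

def is_openly_readable(base_url: str) -> bool:
    """Return True if the cluster answers a read request with no credentials.

    base_url is illustrative, e.g. "http://example-cluster:9200".
    """
    try:
        with urllib.request.urlopen(f"{base_url}/_cluster/health", timeout=5) as resp:
            # A 200 response with cluster-health JSON means anyone who can
            # reach the host can read from the cluster -- the exact
            # condition behind the Dow Jones exposure.
            return resp.status == 200 and "status" in json.load(resp)
    except (urllib.error.URLError, ValueError):
        # Auth required, host unreachable, or a non-JSON reply:
        # the cluster is not openly readable.
        return False
```

Running a check like this against your own endpoints (from outside the trusted network) is a cheap way to catch the misconfiguration before an IoT search engine does.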



(TNS) - One of the winter’s strongest storms brought flooding across Northern California’s wine country Wednesday, with no region hit harder than the town of Guerneville and the Russian River Valley, which has been inundated repeatedly over the decades.

Some 3,600 people in about two dozen communities near the river were evacuated Wednesday by the flooding, which prompted the Sonoma County Board of Supervisors to declare a local emergency. Authorities warned that those who chose to stay in their homes could be stuck there for days.

“We have waterfront property now,” said Dane Pitcher, 70, who watched from the third-story window of his bed and breakfast, the Raford Inn in Healdsburg, as rising water pooled to create a 100-acre lake in front of his property. “We’re marooned for all intents and purposes.”

The Russian River, which sat at about 10 feet Monday morning, rose an extraordinary 34 feet over two days, said Carolina Walbrun, meteorologist with the National Weather Service in the Bay Area. By Wednesday afternoon, the river had swollen to 44.3 feet — more than 12 feet above flood stage. One rain gauge near Guerneville reported nearly 20.5 inches of rain in 48 hours by early Wednesday, turning the town into a Russian River island.



The Threat and Risk Assessment (TRA) is one aspect of business continuity that has come under criticism recently. In our opinion, this tool remains highly valuable, provided it is used correctly.

The complaints against the TRA are similar to those expressed about the Business Impact Analysis. People say it isn’t useful, that the information gathered tends to be of low quality, and that it’s too disruptive to the staff of other departments.



(TNS) - Dozens of emergency responders rushed Tuesday around the Capitol Federal building in downtown Topeka during an exercise simulating an active assailant incident.

The training was organized by the Shawnee County, Kan., Department of Emergency Management and included several area agencies.

"We like to think when it happens here, versus if — that way we have that mindset and we're more prepared," said emergency management director Dusty Nichols.

The rescue task force stems from the 1999 Columbine High School shooting and other mass casualty events.



Beware of These Risks to Build Resilience

Steve Durbin, Managing Director of the Information Security Forum (ISF), discusses some of the key risks to organizations today and provides guidance on how to steer clear of them while becoming more resilient.

Until recently, leading executives at organizations around the world received information and reports encouraging them to consider information and cybersecurity risk. Yet not all of them understood how to respond to those risks and the implications for their organizations. A thorough understanding of what happened (and why it is necessary to properly understand and respond to underlying risks) is needed by the C-suite, as well as all members of an organization’s board of directors in today’s global business climate. Without this understanding, risk analyses and resulting decisions may be flawed, leading organizations to take on greater risk than intended.

Cyberspace is an increasingly attractive hunting ground for criminals, activists and terrorists motivated to make money, get noticed, cause disruption or even bring down corporations and governments through online attacks. Over the past few years, we’ve seen cybercriminals demonstrating a higher degree of collaboration amongst themselves and a degree of technical competency that caught many large organizations unawares.



(TNS) - This weekend’s storm has meant long hours for emergency personnel as numerous stranded motorists were in need of rescuing.

According to Mower County, Minn., Sheriff Steve Sandvik, preliminary numbers indicate that over 150 vehicles were abandoned throughout Mower County during the storm.

Many of those vehicles contained people in need of rescue.

The severity of the storm became apparent after a deputy’s squad car and a snowplow sent to rescue a stranded woman and her grandchild both got stuck Saturday night six miles north of Austin. Sandvik had to call in a road grader to get them out.



Six years ago, I noticed a pattern in the inquiry calls I was fielding from clients. At the time, many of them centered around things like BYOD, whether to take away local admin rights from PCs, and other decisions driven by escalating fears of security or compliance risks. If I was able to answer their questions in less than 30 minutes, it gave me an opportunity to ask a question or two of my own: “So you have responsibility for the productivity of 10,000 people, yes?” Their answer was usually some variation of “I guess you could say that.” To which I would then ask: “OK, tell me what you know about how your decisions will impact their motivation or willingness to engage.” After a few moments of uncomfortable silence, their answer was often “I don’t know.” An opportunity was born.

Fast-forward to today, and I’m proud to be sharing with you the results of six years’ worth of research to better understand what really drives employee experience (EX). Spoiler alert: It’s not what you think it is. Ask any group of managers to rank in order of importance the factors they think are most likely to create a positive employee experience. They will say things like recognition, pay-for-performance, important work, great colleagues, or flexibility. Of course these things are important, but they’re not the most important. Psychological research shows that the most important factor for employee experience is being able to make progress every day toward the work that they believe is most important. But when presented with this option, managers will consistently rank it dead last. Clearly, we have a gap.



Thursday, 28 February 2019 15:06

The Employee Experience Index

In the cyber threat climate of the 21st century, sticking with DevOps is no longer an option

In 2016, about eight years following the birth of DevOps as the new software delivery paradigm, Hewlett Packard Enterprise released a survey of professionals working in this field. The goal of the report was to gauge application security sentiment, and it found nearly 100% of respondents agreed that DevOps offers opportunities to improve overall software security.

Something else that the HPE report revealed was a false sense of security among developers since only 20% of them actually conducted security testing during the DevOps process, and 17% admitted to not using any security strategies before the application delivery stage.

Another worrisome finding in the HPE report was that the ratio of security specialists to software developers in the DevOps world was 1:80. As can be expected, this low ratio had an impact on clients that rely on DevOps because security issues were detected during the configuration and monitoring stages, thereby calling into question the efficiency of DevOps as a methodology.



When developing their business continuity plans, office managers, IT leads and risk teams now have a new weapon in their arsenal – flexible workspace

According to a recent global study by Regus, a staggering 73% of respondents claimed that flexible workspace solutions have helped mitigate risks that could threaten the flow of business operations.

As Joe Sullivan, Regus’ Managing Director of Workspace Recovery observes: “Flex space has become a preferred choice when companies establish or upgrade their business continuity plans.

“Today we no longer assume that all the bad stuff happens to someone else,” observes Sullivan. Indeed, according to the 2019 WEF Global Risks Report, “extreme weather events” were cited as the number one risk facing countries globally, followed closely by natural disasters, data fraud and cyber-attacks.



The UK Government has published a new document which highlights some of the expected impacts of a no-deal Brexit on businesses. It concludes that 'lack of preparation by businesses and individuals is likely to add to the disruption experienced in a no-deal scenario.'

Entitled 'Implications for business and trade of a no deal exit on 29 March 2019', the document summarises Government activity to prepare for no deal as a contingency plan, and provides an assessment of the implications of a no deal exit for trade and for businesses, given the preparations that have been made.

Some of the highlights from the document include:



Because many organizations tend to overlook or underestimate the threat, social media sites, including Facebook, Twitter, and Instagram, are a huge blind spot in enterprise defenses.

Social media platforms present far more than just a productivity drain for organizations.

New research from Bromium shows that Facebook, Twitter, Instagram, and other high-traffic social media sites have become massive centers for malware distribution and other kinds of criminal activity. Four of the top five websites currently hosting cryptocurrency mining tools are social media sites.

Bromium's study also finds one in five organizations have been infected with malware distributed via a social media platform, and more than 12% already have experienced a data breach as a result. Because many organizations tend to overlook or underestimate the threat, social media sites are a huge blind spot in enterprise defenses, the study found.



(TNS) - Aurora, Ill., police released audio of the 911 calls and emergency dispatch made during the Feb. 15 Aurora warehouse shooting at Henry Pratt Co. that left five employees dead.

Police communications detail the hour-long manhunt that injured six police officers, who were also identified by the department Monday.

The shooter — Gary Martin — began firing either during or shortly after a meeting where he was fired from the job he held for 15 years. He then retreated into the back of the 29,000-square-foot facility at 641 Archer Ave. and was eventually killed in a shootout with Aurora and Naperville police.

Five officers quickly responded to dispatch calls. One said “we are moving north through the warehouse. We haven’t heard anything yet,” when suddenly another officer screams that shots were fired outside in a bay area.



As more organizations move to the public cloud and to DevOps and DevSecOps processes, the open source alternative for host-based intrusion detection is finding new uses.

Used by more than 10,000 organizations around the world, OSSEC has provided an open source alternative for host-based intrusion detection for more than 10 years. From Fortune 10 enterprises to governments to small businesses, OSSEC has long been a standard part of the toolkit for both security and operations teams.

As more organizations move to the public cloud infrastructure and to DevOps and DevSecOps processes, OSSEC is finding new use cases and attracting new fans. Downloads of the project nearly quadrupled in 2018, ending the year at more than 500,000. Much of this new activity was driven by Amazon, Google, and Azure public cloud users.

While many security and operations engineers are familiar with OSSEC in the context of on-premises intrusion detection, this article will focus on the project's growing use and applicability to cloud and DevSecOps use cases for security and compliance.



Wednesday, 27 February 2019 14:43

A 'Cloudy' Future for OSSEC

At least 21 individuals died during the 2019 Polar Vortex—including two university students.

The University of Vermont and the University of Iowa both experienced deaths suspected to be due to exposure to sub-zero temperatures. These universities are no strangers to severe winter weather, but these extreme weather conditions are becoming more common, and campuses must prepare.

It’s impossible to reliably predict every emergency. But weather events are one crisis that can be anticipated, based on your region and common weather threats experienced. Universities and college campuses are also often in the unique position of coordinating with internal safety officials and campus police along with community safety officials. A weather preparedness plan puts processes in place to protect your students, faculty, and institution. By having a weather preparedness plan ready for deployment, your campus can react swiftly to threats—and substantially reduce the risk of injury or even death.



Weather phenomena aren’t the only concern when considering an emergency plan.

OSHA defines workplace emergencies as “an unforeseen situation that threatens your employees, customers, or the public; disrupts or shuts down your operations; or causes physical or environmental damage” which can include:

  • Floods
  • Hurricanes
  • Tornadoes
  • Fires
  • Toxic gas releases
  • Chemical spills
  • Radiological accidents
  • Explosions
  • Civil disturbances
  • Workplace violence resulting in bodily harm and trauma

Keeping employees safe during a critical event is the top priority for any company, so consider these five steps to ensure trauma is kept at a minimum.



The right to be forgotten is a fundamental aspect of both the GDPR and CCPA privacy laws; but its impact on personal information in data backups has yet to be tested. Bill Tolson explains the issue and provides some practical advice.

A great deal has been written about the GDPR and CCPA privacy laws, both of which include a ‘right to be forgotten’. The right to be forgotten is an idea that was put into practice in the European Union (EU) in May 2018 with the General Data Protection Regulation (GDPR).

The main trigger for this radical step came from the business practices of major Internet companies such as Google and Facebook (among others) around how they collect, use, and subsequently sell personal data to other companies for marketing and sales purposes. Additionally, as ‘fake news’ spread, those affected found it was almost impossible to get the Internet companies (including news publishers) to fix or remove the false data. Because of this, the GDPR and CCPA were established to ensure end-user rights to know what data is being collected on them, how it's being used, and if it's being sold and to whom. The right to be forgotten includes the right to have personal information (PI) fixed or removed, quickly.

There continues to be a debate about the practicality of establishing a right to be forgotten (which amounts to an international human right) due in part to the breadth of the regulations and the potential costs to implement. Additionally, there continues to be concern about its impact on the right to freedom of expression. However, most experts don’t foresee these new privacy rights disappearing, ever.



(TNS) - Cambria County officials are making efforts to ensure that first responders can communicate effectively and consistently with each other when it matters most – during emergency calls.

An overhaul of the county’s 911 radio system got rolling last March, when the Cambria County commissioners approved a contract with Mission Critical Partners – tasked with analyzing the current 911 network, and tracking immediate fixes and future design enhancements.

The $201,870 contract covers network design services, along with a $16,500 equipment allowance.

Robbin Melnyk, county 911 coordinator, said the coverage area of the current radio system has been affected by tree growth and the use of analog radios instead of digital units.

That has created situations in which first responders can’t communicate with dispatchers at the 911 center or with each other on emergency scenes.



Financial software company Intuit discovered that tax return info was accessed by an unauthorized party after an undisclosed number of TurboTax tax preparation software accounts were breached in a credential stuffing attack.

A credential stuffing attack is one in which attackers compile usernames and passwords that were leaked in previous security breaches and use those credentials to try to gain access to accounts at other sites. This type of attack works particularly well against users who reuse the same password at every site.
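To illustrate the mechanics, here is a minimal sketch of how a breached password list can be replayed against another site's accounts. All names, passwords and data structures are hypothetical, and the hash function is simplified for brevity:

```python
import hashlib

# Hypothetical credential pairs leaked from some *other* site's breach.
leaked_credentials = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "letmein"),
]

def hash_pw(password: str, salt: str) -> str:
    # Sketch only: real systems should use a slow hash such as bcrypt or Argon2.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# This site's account store: username -> (salt, salted password hash).
accounts = {
    "alice@example.com": ("s1", hash_pw("hunter2", "s1")),     # reused password
    "bob@example.com":   ("s2", hash_pw("different!", "s2")),  # unique password
}

def at_risk_accounts(leaked, accounts):
    """Return users whose leaked password also unlocks their account here."""
    hits = []
    for user, leaked_pw in leaked:
        if user in accounts:
            salt, stored = accounts[user]
            if hash_pw(leaked_pw, salt) == stored:
                hits.append(user)
    return hits

print(at_risk_accounts(leaked_credentials, accounts))  # ['alice@example.com']
```

Only the reused password succeeds, which is why unique per-site passwords (and multi-factor authentication) are the standard defenses against credential stuffing.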



Despite the openness of the Android platform, Google has managed to keep its Play store mainly free of malware and malicious apps. Outside of the marketplace is a different matter.

In 2018, Google saw more attacks on users' privacy, continued to fight against dishonest developers, and focused on detecting the more sophisticated tactics of mobile malware and adware developers, the Internet giant stated in a recent blog post. 

Google's efforts — and those of various security firms — highlight that, despite ongoing success against mobile malware, attackers continue to improve their techniques. Malware developers continue to find new ways to hide functionality in otherwise legitimate-seeming apps. Mobile applications with potentially unwanted functionality, so-called PUAs, and applications that eventually download additional functionality or drop malicious code, known as droppers, are both significant threats, according to security firm Kaspersky Lab.

For Google, the fight against malicious mobile app developers is an unrelenting war to keep bad code off its Google Play app store, the firm said. 



The reports of the death of the field of business continuity have been greatly overstated. But those of us who work in it do have to raise our performance in a few critical areas.

For some time, reports predicting the imminent demise of the field of business continuity have been a staple of industry publications and gatherings.

The most prominent of these have been the manifesto and book written by David Lindstedt and Mark Armour. For an interesting summary and review of their work, check out this article by Charlie Maclean Bristol on BC Training.



Friday, 22 February 2019 15:25

Business Continuity, R.I.P.?

Recommended best practices not effective against certain types of attacks, they say.

Automated online password-guessing attacks, where adversaries try numerous combinations of usernames and passwords to try and break into accounts, have emerged as a major threat to Web service providers in recent years.

Next week, two security researchers will present a paper at the Network and Distributed System Security Symposium (NDSS Symposium) in San Diego that proposes a new, more scalable approach to addressing the problem.

The approach — described in a paper titled "Distinguishing Attacks from Legitimate Authentication Traffic at Scale" — is designed specifically to address challenges posed by untargeted online password-guessing attacks. These are attacks where an adversary distributes password guesses across a very large range of accounts in an automated fashion.
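The intuition behind flagging untargeted attacks can be shown with a toy example. This is not the paper's actual method, just an illustration of why spreading one guess across many accounts evades per-account defenses while still leaving a signal in aggregate; all usernames and rates below are made up:

```python
def global_failure_rate(events):
    """events: list of (username, succeeded) login attempts."""
    failures = sum(1 for _, ok in events if not ok)
    return failures / len(events)

# Baseline traffic: 100 logins with a typical ~5% of attempts mistyped.
normal = [("user%d" % i, True) for i in range(95)] + [("user1", False)] * 5

# Untargeted attack: one guess against each of 100 different accounts, so no
# per-account lockout ever fires -- but the site-wide failure rate jumps.
attack = normal + [("victim%d" % i, False) for i in range(100)]

print(global_failure_rate(normal))  # 0.05
print(global_failure_rate(attack))  # 0.525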



Safe Web Use Practices for Investment Firms

Regulating web use for employees via compliance handbooks and URL filters for blacklisted (bad) and whitelisted (good) online resources has failed to improve compliance. Authentic8’s John Klassen discusses how firms are increasingly turning to a centrally managed and monitored cloud browser to regain control, unobtrusively maximize visibility into employees’ web activities and ensure compliance without sacrificing productivity or risking an internal backlash.

Pressure from the SEC and state authorities has increased over the past two years to remediate areas of cybersecurity weakness. Yet regulators and compliance professionals agree that alarming gaps remain in how regulated financial services firms use the web. Many firms still struggle to effectively control, secure and monitor employee web activities.

So what’s the holdup?

Industry insiders point to the ubiquitous use of a tool that was conceived almost 30 years ago: the locally installed browser. Many firms still use a traditional “free” browser for all their web activities, its inherent architectural flaws and vulnerabilities notwithstanding. At the same time, CCOs and IT are also increasingly aware of the risks associated with local browser use:



UK businesses are most concerned about the susceptibility of 5G to cyber attacks according to EY’s latest Technology, Media and Telecommunications (TMT) research.

40 percent of respondents are worried about 5G and cyber attacks, while a similar percentage (37 percent) are cautious over the security of Internet of Things (IoT) connectivity. The survey also found that while 5G investment is set to catch up with Internet of Things spend over the next two years, doubts surround its readiness and relevance. Just over one third of respondents fear that 5G is too immature, while 32 percent believe it lacks relevance to overall technology and business strategy.

The survey of 200 UK businesses looked at attitudes towards the adoption of 5G and IoT technology as well as organizations’ expectations from tech suppliers.



The constant stresses from advanced malware to zero-day vulnerabilities can easily turn into employee overload with potentially dangerous consequences. Here's how to turn down the pressure.

Cybersecurity is one of the only IT roles where there are people actively trying to ruin your day, 24/7. The pressure concerns are well documented. A 2018 global survey of 1,600 IT pros found that 26% of respondents cited advanced malware and zero-day vulnerabilities as the top cause for the operational pressure that security practitioners experience. Other top concerns include budget constraints (17%) and a lack of security skills (16%).

For a security practitioner, there is always the possibility of receiving a late-night phone call any day of the week alerting you that your environment has been breached and that customer data has been publicized across the web. Today, a data breach is no longer just a worst-case scenario; it's a matter of when, a consequence that weighs heavily on everyone — from threat analyst to CISO.



Preparing a business for the unknown requires a series of important steps to protect your employees and your operations. For many business owners, this foundation starts with an emergency plan and grows to include a business continuity plan, an inclement weather policy, and perhaps even a lone worker policy to keep employees safe.

So, you’ve made your emergency plans and identified the best people to lead your teams through each phase. Now, it’s time to practice with the low-cost but high-impact emergency planning event known as a tabletop exercise.



What You Need to Know for 2019 – and Beyond

In the fast-moving world of cybersecurity, predicting the full threat landscape is near impossible. But it is possible to extrapolate major risks in the coming months based on trends and events of last year. Anthony J. Ferrante, Global Head of Cybersecurity at FTI Consulting, outlines what organizations must be aware of to be prepared.

In 2018, cyber-related data breaches cost affected organizations an average of $7.5 million per incident — up from $4.9 million in 2017, according to the U.S. Securities and Exchange Commission. The impact of that loss is great enough to put some companies out of business.

As remarkable as that figure is, associated monetary costs do not include the potentially catastrophic effects a cyberattack can have on an organization’s reputation. An international hotel chain, a prominent athletic apparel company and a national ticket distributor were just three of several organizations that experienced data breaches in 2018 affecting millions of their online users — incidents sure to cause public distrust. It’s no coincidence that these companies were targeted — all store valuable user data that is coveted by hackers for nefarious use.

These events and trends should serve as eye openers for what’s ahead this year, as malicious actors are becoming more sophisticated and focused with their attacks. Consider these 10 predictions over the next 10 months:



Thursday, 21 February 2019 17:01

10 Corporate Cybersecurity Predictions

Companies think their data is safer in the public cloud than in on-prem data centers, but the transition is driving security issues.

More business-critical data is finding a new home in the public cloud, which 72% of organizations believe is more secure than their on-prem data centers. But the cloud is fraught with security challenges: Shadow IT, shared responsibility, and poor visibility put data at risk.

These insights come from the second annual "Oracle and KPMG Cloud Threat Report 2019," a deep dive into enterprise cloud security trends. Between 2018 and 2020, researchers predict the number of organizations with more than half of their data in the cloud to increase by a factor of 3.5.

"We're seeing, by and large, respondents are having a high degree of trust in the cloud," says Greg Jensen, senior principal director of security at Oracle. "From last year to this year, we saw an increase in this trust."



ASSP TR-Z590.5-2019 provides guidance from safety experts on proactive steps businesses can take to reduce the risk of an active shooter, prepare employees and ensure a coordinated response should a hostile event occur. It also provides post-incident guidance and best practices for implementing a security plan audit.

Active shooter fatalities spiked to 729 deaths in 2017, more than three times our country’s previous high. A business must know where its threats and vulnerabilities exist. Our consensus-based document contains recommendations on how a business in any industry can better protect itself in advance of such an incident. Based on the collaborative work of more than 30 professionals experienced in law enforcement, industrial security and corporate safety compliance, the report aims to drive a higher level of preparedness against workplace violence.



A new toolkit developed by the Global Cybersecurity Alliance aims to give small businesses a cookbook for better cybersecurity.
Small and mid-sized businesses have most of the same cybersecurity concerns as larger enterprises. What they don't have are the resources to deal with them. A new initiative, the Cybersecurity Toolkit, is intended to bridge that gulf and give small companies the ability to keep themselves safer in an online environment that is increasingly dangerous.

The Toolkit, a joint initiative of the Global Cyber Alliance (GCA) and Mastercard, is intended to give small business owners basic, usable security controls and guidance. It's not, says Alexander Niejelow, senior vice president for cybersecurity coordination and advocacy at Mastercard, that there's no information available to small business owners. He points out that government agencies in the U.S. and the U.K. provide a lot of information on cybersecurity for businesses.

It's just that, "It's very hard for small businesses to consume that. What we wanted to do was remove the barriers to effective action," he says, and go beyond broad guidance to giving them very specific instructions presented, "…if at all possible in a video format and clear easy to use tools that they could use right now to go in and significantly reduce their cyber risk so they could be more secure and more economically stable in both the short and long term."



Bankers around the world are rightly worried about the threats posed by digital disruptors getting in between them and their retail banking customers. But Forrester’s newest research reveals that executives should be just as worried — perhaps even more worried — about another market that is being upended: Small business banking.

Small and medium-sized businesses (also called small and medium-sized enterprises or SMEs) are crucial sources of revenues and profits at most banking providers, so the prospect of bank brands losing their relevance among SMEs should keep bankers awake at night.

Here are just a few of the insights you’ll find in our new research report:



New data from CrowdStrike's incident investigations in 2018 uncover just how quickly nation-state hackers from Russia, North Korea, China, and Iran pivot from patient zero in a target organization.

It takes Russian nation-state hackers just shy of 19 minutes to spread beyond their initial victims in an organization's network - yet another sign of how brazen Russia's nation-state hacking machine has become.

CrowdStrike gleaned this attack-escalation rate from some 30,000-plus cyberattack incidents it investigated in 2018. North Korea followed Russia at a distant second, taking around two hours and 20 minutes to move laterally, followed by China at around four hours and Iran at around five hours and nine minutes.

"This validated what we've seen and believed - that the Russians were better [at lateral movement]," says Dmitri Alperovitch, co-founder and CTO of CrowdStrike. "We really weren't sure how much better," and their dramatically rapid escalation rate came as a bit of a surprise, he says.

Cybercriminals overall are slowest at lateral movement, with an average of nine hours and 42 minutes to move from patient zero to another part of the victim organization. The overall average time for all attackers was more than four-and-a-half hours, CrowdStrike found.



Navigating the Information Age Without Saving Everything

Data retention is a persistent challenge for in-house counsel, but developing workable information governance policies and procedures needn’t be a taxing exercise; in fact, they can generate measurable cost savings to the company. Here, Buckley LLP’s Caitlin Kasmar highlights the importance of being equipped with the right advice at the right time to save in-house counsel the stress of dealing with the challenges of document retention compliance.

The posture of in-house counsel toward information governance and data retention is in the midst of a noticeable and rapid shift from “are we retaining the right information?” to “please, please tell me I can get rid of some of this stuff.”

Those urgent pleas are fed not by data storage costs, which continue to decline, but by savvy in-house lawyers anticipating a subpoena or lawsuit, confronting a decade’s worth of retained emails and calculating compliance costs.

How are in-house counsel expected to advise their business clients on data retention when, in the typical company, numerous legal holds have piled up over time, executives may be effectively exempt from whatever retention/destruction policy is in place and no audit process exists to ensure records are actually deleted in compliance with the policy? The right advice at the right time can save in-house counsel the stress of dealing with these tricky — and, let’s face it, not particularly glamorous — issues.



Tuesday, 19 February 2019 15:25

'Do I Really Need To Keep This?'

(TNS) - Peggy Wood kept sitting up in bed.

She snatched a legal pad and added to a scattered list of things she used to own.

She imagined she was at her old desk in the Driftwood Inn, and jotted what she saw. Six glaze brushes, an embroidery machine, dressmaker’s scissors. A Nikon camera. Lights for that camera, and a backpack. Perfume she spritzed on before going out to shoot photos.

Each item was a chain link in a new insurance filing after Hurricane Michael ruined the Inn she and her family spent four decades building.

The Woods had received a little more than $2 million in insurance payments by January, mostly from flood policies. They still hoped for at least another $1 million from wind coverage but did not know how much it would cost to rebuild the sprawling motel and its outbuildings, 24 units in all.

$3 million? $10 million?



Rich Campagna explores the security and compliance risks associated with data stored in – and accessible from – cloud applications, setting out best practices for assuring end-to-end protection.

With cloud adoption rapidly expanding across an immense range of industries, enterprises around the globe are eagerly embracing the benefits that can be gained from moving their mission-critical services to the public cloud.

Despite the fact that major cloud vendors invest heavily in security, with Microsoft alone dedicating more than $1 billion a year to internal security investments, companies need to understand the hidden risks associated with migrating to the cloud.

That entails senior company executives coming to grips with the security and compliance risks associated with data stored in – and accessible from – cloud applications, and who takes responsibility should the unthinkable occur.



Tuesday, 19 February 2019 15:22

Mind the gap: cloud security best practices

(TNS) - Cambria County Commissioners approved two contracts Thursday that will allow for new connections with other counties and improve existing ones when it comes to 911 communication.

During a regular meeting, the commissioners unanimously approved a 911 fund statewide interconnectivity grant with the Pennsylvania Emergency Management Agency (PEMA), for $439,653.

Robbin Melnyk, county 911 coordinator, said this money will be used to upgrade and renew licenses for two large pieces of equipment purchased by Cambria County and 14 surrounding counties a few years ago.

A second grant of $96,607 will go toward maintenance and monitoring of Cambria County’s software, connecting it with Blair and Somerset counties, Melnyk said.



Let’s be honest: Everything related to a traditional crisis is more likely to cause heartburn than joy.

When most people think of a traditional crisis plan, they envision something “comprehensive” that will prepare them for every conceivable situation. They think of an exhaustive process of research and planning and bulky binders filled with color-coded tabs.

The reality is far simpler. You cannot prepare for every situation. Trying to do so is a fool’s errand. The best plan provides a view from 30,000 feet. It defines the broad strokes of what to say and do (or not), determines who’s in charge of what, specifies who speaks for the organization and why it’s important not to talk out of school.

The main barrier to green-lighting a crisis plan is inertia, for two reasons: it seems arduous, which causes procrastination, and you have so many other priorities competing for your attention and resources.

It’s time to change things up and declutter traditional crisis plans!



No longer can privacy be an isolated function managed by legal or compliance departments with little or no connection to the organization's underlying security technology.

Recent advancements in machine learning and big data analytics have made data more important today than ever before. Companies are now investing heavily in protecting their customers' data; for instance, Facebook has pledged to double its safety and security team to 20,000 people.

Since the introduction of Europe's General Data Protection Regulation (GDPR) in 2018, data protection officers (DPOs) have become the subject of the latest hiring frenzy. Large organizations that are mandated to hire a DPO based on the GDPR's criteria are struggling to find the right person for the job. But how does a DPO fit into the typical security organization?

At the end of the day, a DPO should report directly to top management on all regulation and privacy topics. As such, the perfect candidate must have in-depth knowledge of GDPR and other regulations. Your DPO should also view the responsibilities of GDPR compliance as an opportunity to drive your business forward.



Monday, 18 February 2019 16:56

Privacy Ops: The New Nexus for CISOs & DPOs

Preventing Legal Risks and Liabilities

The #MeToo movement has hammered home for employers the critical importance of keeping sexual harassment out of the workplace. However, a recent federal court case underscores how sexual harassment can occur in ways that defy what many employers might think of as the typical pattern. The ruling by the U.S. District Court for the Eastern District of Pennsylvania comes in a case that has nothing to do with a male boss or co-worker behaving inappropriately with a female colleague. It hinges instead on allegations that a supervisor failed to properly respond to sexual harassment of an employee by a non-employee.

That might bring to mind the Hollywood trope of a hardworking waitress forced to regularly endure catcalls or worse by a male customer, but Hewitt v. BS Transportation defies even this familiar scenario. It involves a lawsuit over alleged male-to-male sexual harassment in the world of big rigs and fuel refineries. In court documents, truck driver Carl Hewitt alleges that his supervisor at BS Transportation failed to take prompt remedial action in response to sexual harassment of Hewitt by a male worker at a fuel distribution company’s refinery. Hewitt routinely traveled to the Pennsylvania facility to pick up fuel bound for NASCAR racecars.



Businesses don't have sufficient staff to find vulnerabilities or protect against their exploitation, according to a new report by Ponemon Institute.

For enterprise IT groups, responding to the volume of new vulnerabilities is growing more difficult – compounded by a chronic lack of skilled cybersecurity professionals to deal with the issues.

That is one of the major conclusions reached in a new report, "Challenging State of Vulnerability Management Today: Gaps in Resources, Risk and Visibility Weaken Cybersecurity Posture," published by Ponemon Institute and sponsored by Balbix.

When asked about the difficulties of maintaining an adequate security posture, 68% of the more than 600 cybersecurity professionals surveyed listed "staffing" as a primary issue. These staffing shortages don't exist exclusively at small organizations, either, with 72% of those surveyed from organizations with more than 1,000 employees.



Backup technology has evolved over the years, but the time has come to take a completely fresh approach, says Avi Raichel. In this article Avi explains: Why backup is a CTO concern; What CTOs need to do to update the backup strategies in place; How CTOs can help the business become IT resilient.

It’s no secret that backup is one of the most important things that a business can invest in, and it’s because of this that the evolution of backup has been such a grand one. The very first computer backups were made on punch cards and large reels of magnetic tape, and the technology has consistently evolved – from tape, to spinning disk, and then on to flash. However, what hasn’t changed with backup is the central idea of creating ‘golden copies’ of data, to be used ‘just in case’.

This idea is now, arguably, archaic. Traditional backups that only provide a snapshot in time are no longer compatible with modern times. In this age, businesses, particularly digital ones, need to be ‘always-on’ – 24/7, 365 days a year. Because of this, recovery point objectives (RPOs) of seconds and recovery time objectives (RTOs) of minutes have become essential.

Essentially, a business needs to be able to recover as quickly as possible from the second it went down – not from a backup made the night before. This dependence on periodic backups, rather than continuous data protection, may be why nearly half of businesses have suffered an unrecoverable data event over the last three years according to the latest IDC State of IT Resilience report.
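The gap between periodic backups and continuous protection comes down to simple arithmetic: a backup taken every N hours can lose up to N hours of data. A minimal sketch (the intervals and the 30-second target are hypothetical figures for illustration):

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo_target: timedelta) -> bool:
    """A periodic backup can lose up to one full interval of data (a failure
    can land just before the next backup runs), so it satisfies an RPO only
    if the interval fits inside the target."""
    return backup_interval <= rpo_target

rpo_target = timedelta(seconds=30)  # hypothetical 'always-on' requirement

print(meets_rpo(timedelta(hours=24), rpo_target))   # False: nightly snapshots
print(meets_rpo(timedelta(seconds=5), rpo_target))  # True: near-continuous replication
```

A nightly backup leaves up to a full day of work exposed, which is why it cannot satisfy an RPO measured in seconds, while continuous data protection can.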



Steering Clear of Antitrust Pitfalls

Knowing how to engage in competitor interactions is often more art than science. There are few clear lines of conduct to guide information exchanges made for legitimate business reasons. But broad principles do exist to help you consider your options carefully. Vedder Price’s Brian McCalmon discusses.

Throughout the country, sales managers, supervisors and executives attend antitrust trainings with varying degrees of regularity and detail. Antitrust as a corporate and individual pitfall is familiar to most doing business in the United States and abroad. If asked, most sales executives and line personnel can list the most dangerous and easily spotted scenarios to avoid: Don’t ask competitors about their pricing plans; don’t talk to competitors about customers; if competitors begin to discuss forbidden topics in a trade association meeting, stand up, announce your departure for the record and abruptly exit. This is all Antitrust 101.

But there is an Antitrust 102 and 103, and situations calling for a deeper understanding of antitrust may be thrust upon senior executives before they have had time to digest the consequences of a bad choice in the moment. Some risks may be so far from obvious that the executive may never see the antitrust consequences at all. And a healthy respect for the antitrust laws, coupled with a poor understanding of them, has led to the unnecessary stifling of potentially efficient corporate initiatives. A deeper understanding of how communications with competitors, suppliers and customers may violate competition law can reduce risk and allow more efficient and procompetitive arrangements to flourish.



What does a business continuity or disaster recovery plan consist of? In a nutshell, it’s what needs to happen in case you can’t continue normal operations or business due to an “activity” that may have affected your organization. I am not trying to minimize this in the least. That’s just the tip of the iceberg. We NEED plans. We need to know what to do so that when we have to make critical decisions, the information is at our fingertips (especially when it’s an automated tool). Building these plans is vital to the survival of the business, should something occur. Most of our organizations are regulated and required to have plans. It’s not only a type of insurance policy, but it makes us feel better knowing it’s in place…but what happens when you need to activate that plan? Just as critical as the plan itself are the people needed to respond and assist in the recovery efforts. People execute the plan. Someone needs to flip the switch. Without people, your effort, time and planning will not be much help.

With that said, we need to make sure we prepare our employees, so they know what to expect and what is expected. How do we do that? We teach them. We exercise the plans and involve those people.

Most organizations don’t do full-scale exercises with their entire staff. It costs a lot of money and resources, and takes up a lot of time from the work day. This would be the most desirable type of exercise and something we should all aim to achieve. If you can conduct something like this, that’s fantastic! If not, consider starting by setting up a tabletop exercise to walk through what’s currently in place in your plans.



A wireless device resembling an Apple USB-Lightning cable that can exploit any system via keyboard interface highlights risks associated with hardware Trojans and insecure supply chains.

During a month-long hiatus between jobs, Mike Grover challenged himself to advance a project he'd been working on for over a year: Creating a USB cable capable of compromising any computer into which it's inserted.

His latest iteration, the Offensive MG or O.MG cable, resembles an Apple-manufactured Mac USB-Lightning cable but incorporates a wireless access point into the USB connector, allowing remote access from at least 100 feet away, according to Grover. A video demonstration shows Grover taking control of a MacBook and opening up Web pages from his phone.

The cable takes advantage of a known weakness. To make keyboards, mice, and other input devices as easy to connect as possible, operating system makers have made computers accept the identification, through the Human Interface Device (HID) protocol, of any device plugged into a USB port. An attacker can use the weakness to create a device that acts like a keyboard to issue keystrokes, or a mouse to issue clicks.



(TNS) - It’s been a year since the Valentine’s Day murder of 17 students and staff members and the wounding of 17 others at Marjory Stoneman Douglas High School in Parkland, Florida.

Since then, schools around the country have taken steps to beef up security.

In this area, several schools have made great strides to improve the safety of the students and teachers.

Many of the improvements deal with how people enter school buildings.

“The number one thing that we’ve done: we put a kiosk system in where when you come in you have to bring your [driver’s] license in now. We know everybody who comes in and out of our building. So will that stop a shooting? No, but we actually have a better understanding of who is going to be in our building or not,” said Mel Rentschler, superintendent at Allen East schools.



Friday, 15 February 2019 15:06

Preparing for the Next School Shooting

Doron Pinhas looks at the common factors behind various high-profile technology outages in 2018 and proposes a practical approach which will help organizations reduce unplanned downtime in 2019.

Flying these days is almost never a pleasure, but in 2018, it was a downright nightmare with dozens of glitches and outages that kept planes grounded. 2018 wasn't such a great year for other industry sectors, either. Financial service customers also had a rough year accessing their funds and performing urgent financial transactions. In the UK, for example, banks experienced outage after outage. Three of Britain's biggest banks - HSBC, Barclays and TSB - all experienced outages on a single day, making online banking impossible, and there were dozens of other incidents peppered throughout the year.

And if your business lives on cloud platforms and SaaS, you might have found yourself running ragged at times trying to access your IT with all of the major cloud platforms suffering from outages throughout the year as well.

It may be 2019 now, but the fundamental gaps that led to those service disruptions haven't been resolved, so we can expect more such outages this year, and probably every year until companies figure it out – which, if you’re a business continuity or IT professional, raises the question: what should I do to avoid outages?



Some have even turned to alcohol and medication to cope with pressure.

A quarter of chief information security officers (CISOs) suffer from mental and health disorders as a result of tremendous and growing work pressures, a new survey shows.

Contributing to the strain are concerns about job security, inadequate budget and resources, and a continued lack of support from the board and upper management.

Domain name registry service provider Nominet recently polled 408 CISOs working at midsize and large organizations in the United Kingdom and United States about the challenges they encounter in their jobs.

A whopping 91% of the respondents admitted to experiencing moderate to high stress, and 26% said the stress was impacting them mentally and physically. A troubling 17% of the CISOs who took Nominet's survey admitted to turning to alcohol and medication to deal with the stress, and 23% said their work was ruining personal relationships.



Paul Barry-Walsh argues that as complexity increases in society, so do interdependencies. To prevent cascading disasters, organizations need to implement firebreaks which will ensure that they do not become the weak link in the supply chain.

There is a characteristic which is self-evident to professionals in this field: as we develop as a society, we become increasingly reliant on more and more suppliers delivering products or services. Should just one component of the supply chain be disrupted, then that service or product cannot be delivered. This can result in chaos. This is simply a manifestation of Adam Smith’s contention that the increased division of labour allows increasing output. However, with ever more suppliers, and the implementation of just-in-time production, the loss of just one small component disrupts the entire chain. This is as true for services as it is for manufacturing, and after Adam Smith we should perhaps refer to this as ‘Adam’s Law’.

To illustrate this, imagine a Venetian banker in the 16th century. He would need ledgers, quills and ink, possibly a desk, and to operate in a secure environment under the rule of law, but that’s about it. Now consider his modern counterpart. Just to provide the most basic of modern-day services, the banker must still operate within the rule of law, but she or he also needs sophisticated computers, a base to operate from, communication devices, and an army of people to run the operation: accountants, data-entry staff, lawyers, compliance people, and then HR to manage them all.

That’s a complex web of people and products just to do the simplest banking operation. This complexity brings with it vulnerability: if staff are denied access to the office, or if there is no electricity (or water), then the organization cannot function. And if it cannot function, there will be a knock-on effect for its counterparties, due to the interconnectedness of our society. If just one bank fails, there is a domino effect on other financial institutions and counterparties.



(TNS) — The Garfield County, Okla., Sheriff's Office is offering training in active-attack response to area schools and will also provide the course to employees at the county courthouse.

Acting Sheriff Jody Helm said this is the third year the sheriff's office has offered training to county schools. Previous training topics concerned weapons in schools and drugs in schools.

"They've been really receptive," Helm said.

Deputy Lloyd Cross presented the training, developed by the Advanced Law Enforcement Rapid Response Training (ALERRT) Center at Texas State University, to the staff of Kremlin-Hillsdale High School on Wednesday.

Cross said the goal was to present the information to administrators and teachers and not determine policy for the school system.



Many times when we talk about communications plans and campaigns, we focus on the tactics. Which makes sense – these are the things we can see: the clever social media post, the direct mail piece, the slick website. But the true way to evaluate a communications plan or marketing campaign is through measurement.

My favorite way to illustrate the different types of measures and how they work comes from the book Effective Public Relations, Ninth Edition. This is the book I used to study for my Accreditation in Public Relations, and it’s still on my shelf, dog-eared and bursting with post-it notes. I have adapted their graphic into my own, which you can see here:



Friday, 15 February 2019 14:57

How to measure communications plan success

When each member of your security team is focused on one narrow slice of the pie, it's easy for adversaries to enter through the cracks. Here are five ways to stop them.

Today, enterprises consist of complex interconnected environments made up of infrastructure devices, servers, fixed and mobile end-user devices, and a variety of applications hosted on-premises and in the cloud. The problem is that traditional cybersecurity teams were not designed to handle such complexities. Cybersecurity teams were originally built around traditional IT—with a specific set of people focused on a specific set of tools and projects.

As enterprise environments have grown, this siloed approach to cybersecurity no longer works. When each member of your security team is only focused on one narrow slice of the pie, it’s far too easy for adversaries to enter through the cracks. The following are critical steps chief information security officers (CISOs) must take in order to establish a dream team for the new age of cybersecurity.



Truth is, in most of the reports we write about how to prepare your company for the future, two major recommendations always come out: Get your C-level leaders on board, and cultivate a culture that can transform your business. The first is crucial yet obvious, and I’ve grown tired of writing it. The second, culture, is equally obvious, but it’s also huge. Yes, we have statistically measured the role of culture in successful digital transformations and found that culture is the strongest predictor of whether you’ll make it. But culture is enormous, and changing it can feel overwhelming.

Today we offer a lifeline of incredible value. Culture can encompass a myriad of things, but it is best measured at the level of individual employees. Do they like being there? Do they support the mission of the organization? Do they feel supported in trying to accomplish the goals of the company? All of these things matter, but today the responsibility for engaging employees is diffused across the org. HR helps but focuses on narrow metrics while not touching on the business strategy. Leaders occasionally try to motivate with enthusiasm, but they don’t rigorously account for the impact of their demands on the employee base. And when you add technology, it’s clearly not IT’s job to make sure people feel like the tech is helping them as much as it’s helping the customer. Drowning yet?

That’s where our lifeline comes in: “Introducing Forrester’s Employee Experience Index.” Rather than simply telling you to go engage your employees, we’ve systematized the process. We’ve spent two years surveying more than 13,800 employees in seven countries. Drawing from the best of three decades of organizational psychology research, we’ve constructed a tool that identifies what an engaged worker looks like and then worked backward from there to figure out what factors either help or hurt employee engagement. The result is a clear blueprint for inspiring, empowering, and enabling your employee base. 



Did “data analytics” ruin baseball? Depends on whom you ask: the cranky old man in a Staten Island bar or the nerd busy calculating Manny Machado’s wRC+ (it was 141 in 2018, if you cared to know). 

What is indisputable, though, is that the so-called “Sabermetrics revolution” rapidly and fundamentally changed how the game is played – this is not your grandpa’s outfield!

And data is eating the whole world, not just baseball. Now it’s coming for the legal profession, of all places. The Financial Times recently published an article on how law analytics companies are using statistics on judges and courts to weigh how a lawsuit might play out in the real world. One such company does the following (per the article): 



Friday, 15 February 2019 14:53


Findings from Dun & Bradstreet

According to a report by Dun & Bradstreet, compliance and procurement professionals indicate that fraud tops the list of challenges, and technological advances exacerbate the problem. While technology is an enabler to these industries by creating the potential for improved efficiency and data management, in some instances, it may be putting organizations at greater risk for fraud if not implemented properly. Brian Alster discusses the approach compliance leaders should take to protect the organization.

Compliance professionals didn’t have it easy in 2018; significant regulations spanned industries globally – touching finance, trade and data in a big way. Among the related challenges of this business environment, the risk of fraud remains near the top of the list for many companies, a majority of whom have seen incidences of fraud negatively impact their business. Detection methods to combat fraud evolve over time, but so, too, do the fraudsters, turning the situation into a never-ending game of cat and mouse.

A majority (72 percent) of respondents to the second Dun & Bradstreet Compliance and Procurement Sentiment Report say fraud has had an impact on their company’s brand. In an effort to uncover the top issues and concerns among both compliance and procurement professionals, Dun & Bradstreet surveyed more than 600 professionals from the U.S. and U.K., delving into a range of questions about their roles, as well as their impressions of the industry overall. With this second report, we were able to measure changes in overall sentiment compared with the benchmark conducted earlier last year, and we dove deeper into fraud concerns and the use of technology.



Friday, 15 February 2019 14:46

Fraud A Top Concern For Compliance Leaders

Extra, extra! Read all about it!

TOPO declares that 86% of account-based organizations report improved close rates, and 80% say account-based strategies are driving increased customer lifetime value!

TribalVision channels the ITSMA when it reports that companies implementing account-based marketing (ABM) strategies typically see a 171% increase in annual contract value!

Really? Wow. Huh, it doesn’t look that way from where I’m standing.

From my (tenuous?) perch atop Forrester’s ABM research pile, it looks like FOMO* (more than anything) is driving marketers to take up the ABM banner. Our research, trends studies, and customer interactions show that ABM continues as a popular topic among B2B marketers and sellers. But many claims hit an almost hysterical note: Do this now or be left behind!



Online dating profiles and social media accounts add to the rich data sources that allow criminals to tailor attacks.

US-CERT and Cupid don't often keep company, but this Valentine's Day is being marked by new threats to those seeking romance and new warnings from the federal cybersecurity group.

A notice from US-CERT points to an FTC blog post about how consumers can protect themselves from online scams involving dating sites, personal messaging systems, and the promise of romance and companionship from online strangers.

The general warning comes as specific scams are being exposed by online researchers. For example, researchers at Agari Data have followed a Nigeria-based group dubbed "Scarlet Widow" since 2017 as they exploited vulnerable populations, moving from romantic "attacks" against isolated farmers and individuals with disabilities to business email compromises that raised the financial stakes.



Thursday, 14 February 2019 15:34

Scammers Fall in Love with Valentine's Day

NEW YORK and SAN FRANCISCO — An authoritative legal-industry report on the current state of artificial intelligence (AI) in contract analysis and data extraction and its applications within the legal community was released today. Leading industry analyst firm Ari Kaplan Advisors was engaged by Seal Software to design and conduct unbiased research, the findings of which provide clarity on how legal departments at large corporations perceive and practically apply AI-driven contract analytics in a broad range of matters.

The report is derived from comprehensive interviews with professionals, predominantly at Fortune 1000 organizations, who exercise influence over the adoption and deployment of AI technology. Law department leaders from American Express (NYSE:AXP), Hewlett Packard Enterprise (NYSE:HPE), Nokia (NYSE:NOK), Novartis (NYSE:NVS), Atos (EURONEXT:ATO), Transocean Ltd. (NYSE:RIG), SI Group Inc., CyrusOne (NASDAQ:CONE), PagerDuty and Olympus Corporation of the Americas, among others, shared their views in the benchmarking study. All but one of the participants were lawyers, about two-thirds of whom were with organizations that had more than $5 billion in revenue, and most worked at companies with more than 5,000 employees.

“It was a privilege to speak with so many industry leaders and I am proud to share their perspectives about the promise and practical application of this technology,” said Ari Kaplan, principal of Ari Kaplan Advisors. “I hope this report fuels a productive dialogue that drives the legal community forward.”



(TNS) - Bay County and the cities of Springfield and Callaway will begin their final passes of free Hurricane Michael debris removal on March 11.

Residents in the two cities and incorporated areas of the county are encouraged to have all debris on their curbs by March 10 to help with the pickup. The final wave of cleanup will last through mid-April, after which any debris will be removed at homeowners' expense, officials said.

"We've got to get this place cleaned up," said Philip Griffitts, chairman of the Bay County Commission. "We continue to see illegal dumping ... we've got to set a date now or we'll never get this done."

While Springfield and Callaway decided to partner with the county on their final debris passes, other cities in the area still have their own schedules. Property owners in other cities can contact their local governments for information on when debris collection will end there.



In today’s school environment, effective communication is a complex undertaking. The average public school in America has more than 500 students.  Meanwhile, colleges and universities can easily have upwards of tens of thousands of students. On top of that, the different members of a school community—students, faculty, staff members, and parents—tend to have wildly different communication preferences and behaviors.

Administrators need to quickly send school-wide notifications about weather delays and closings. Teachers need to send classroom updates to all of their students’ parents. Parents and students also need to communicate effectively with teachers and administrators. Whatever the case, regular, well-executed communication is vital in a school setting. But how can schools most effectively and efficiently communicate to keep everyone safe, informed, and up to date? The key lies in a modern mass notification system for schools.
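At its core, the routing problem described above is simple to state: match each message to recipients by role (student, parent, staff, administrator) and group membership, whether the audience is the whole school or a single classroom. The following is a minimal sketch of that fan-out logic; the `Recipient` model and `fan_out` helper are illustrative assumptions, not any vendor's API, and actual delivery (SMS, email, push) is out of scope.

```python
from dataclasses import dataclass, field

@dataclass
class Recipient:
    name: str
    roles: set = field(default_factory=set)   # e.g. {"parent"}, {"staff"}
    groups: set = field(default_factory=set)  # e.g. {"grade-3"}, {"faculty"}

def fan_out(recipients, message, roles=None, groups=None):
    """Return the recipients who should receive this message.

    A school-wide alert leaves roles and groups as None (everyone);
    a classroom update targets a role and group, e.g. grade-3 parents.
    """
    hits = []
    for r in recipients:
        role_ok = roles is None or r.roles & roles
        group_ok = groups is None or r.groups & groups
        if role_ok and group_ok:
            hits.append(r)
    return hits

people = [
    Recipient("Ana", {"parent"}, {"grade-3"}),
    Recipient("Ben", {"staff"}, {"faculty"}),
    Recipient("Caro", {"student"}, {"grade-5"}),
]
# School-wide weather closing: reaches everyone
assert len(fan_out(people, "Snow day: school closed")) == 3
# Classroom update: only grade-3 parents
assert [r.name for r in fan_out(people, "Field trip Friday",
                                roles={"parent"}, groups={"grade-3"})] == ["Ana"]
```

Keeping role and group filters separate is what lets one system serve administrators (broadcast), teachers (one classroom), and staff-only operational notices without separate tools.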



All data belonging to US users – including backup copies – has been deleted in the catastrophe, VFEmail says.


An unknown attacker appears to have deleted 18 years' worth of customer emails, along with all backup copies of the data, at email provider VFEmail.

A note on the firm's website Tuesday described the attack, first reported by KrebsOnSecurity, as causing "catastrophic destruction."

"This person has destroyed all data in the US, both primary and backup systems. We are working to recover what data we can," the note read. VFEmail was established in 2001 and provides free and paid email services, including bulk email services in the US and elsewhere.

The attack, described in a series of tweets from the firm, seems to have occurred on Monday and targeted all of VFEmail's externally facing servers across its data centers. Though the servers were running different operating systems and did not all share the same authentication, the attacker managed to access each one and reformat them all the same.



Digital intelligence (DI) – the practice of understanding and optimizing digital customer engagements – has been around for as long as the internet itself. But it has not remained stagnant. The practices and technologies needed to support DI have continued to be revolutionized by digital disruption, and the means by which customers interact with a brand have skyrocketed in recent years, showing no signs of slowing down. In a recent press release, IHS Markit estimated the number of internet-connected devices will grow to 125 billion by 2030, up from around 27 billion in 2017.

Understanding and optimizing digital customer engagement in today’s environment demands a dizzying combination of DI tech. Forrester recently analyzed the DI market to make sense of it and published our findings in The Forrester Tech Tide™: Digital Intelligence Technologies, Q1 2019.



(TNS) — The strongest and potentially wettest storm of the winter season is bearing down on Southern California this week, threatening to unleash debris flows in burn areas in Orange and Riverside counties as the region’s wild winter continues.

The atmospheric river-fueled storm, packed with subtropical moisture, will take aim at large swaths of the already-soaked state beginning early Wednesday and lasting through Thursday.

The amount of precipitation from the storm will vary depending on the region, with San Diego, Orange and Riverside counties likely to be pounded with up to 2 inches of rain along the coast and up to 10 inches at higher elevations. This could create a dangerous situation for residents in recent burn areas, according to the National Weather Service.

Forecasters predict the Holy fire burn scar will see 2.5 to 6 inches of rain, while the area affected by the Cranston fire last year will likely experience 3 to 8 inches of precipitation through Thursday. That has the potential to trigger debris flows and flooding, according to the weather service.



(TNS) — A stubbed toe, a scraped knee, a twisted ankle.

Call 911 in Pinellas County, Fla., about any of those injuries and at least four people in two vehicles will show up.

But a new proposal — already implemented in Hillsborough County and across the country — that's being considered by county government and some of the cities, including St. Petersburg, would reduce the response for certain minor medical issues.

The goal: "We preserve our resources for the most severe calls, and ultimately improve our response times on the most critical emergencies," said St. Petersburg Fire Rescue Division Chief Ian Womack to City Council on Thursday. "The general principle is, if you over-resource low-priority calls, that unit is then committed to the low priority call."



What does the term “digital transformation” mean to you?

Is it about digital customer experiences? Digital operations? Transforming business models? Leveraging software ecosystems? Is it a floor wax? A dessert topping?

Digital transformation (DT) as a term loses meaning when it involves everything under the sun. Over the past few years, we’ve seen companies label anything and everything as “digital transformation” — no wonder DT initiatives meander and stall.

Companies that succeed with transformation initiatives keep a laser focus on using technology to deliver business results. How? Not with long cycles of business requirements and software implementations.



(TNS) - After the Camp fire destroyed their home in Paradise, Calif., last November, Anastasia Skinner, 26, her husband and their three young children left their community and moved in with a relative in Nevada. But the schools there didn’t offer two of her special-needs children the care they required.

So it was welcome news when Skinner, who was pregnant when she fled the fire, heard that officials were allowing residents to move back and live on their fire-scarred properties in temporary dwellings. With the insurance money they collected, Skinner and her husband purchased a used RV for $10,000 and headed back to Paradise.

For about a month, they made a home of their small RV. Money was tight, and they spent more on water, propane and gas than what they paid for the monthly mortgage on their now-destroyed house of six years. But, unlike thousands of others, at least the Skinners had a home.



TPRM in the Wake of GDPR

Cisco has just released a Data Privacy Benchmark Study, revealing that outsourcers are taking seriously their responsibility to protect customers’ data. Tom Garrubba, Senior Director and CISO at Shared Assessments, offers his perspective on third parties’ performance of late.

Those of us in the privacy profession knew it was only a matter of time until privacy-minded organizations would see the benefits of their internal analysis and hard work. Their efforts to refine and/or create policies, procedures, standards and practices that better secure and guard privacy during the handling of their customers’ personally identifiable information are paying off.

Evidence of this came to light in the new Cisco Data Privacy Benchmark Study, published in late January 2019. The study indicates both outsourcing organizations and service providers are modifying the way they are doing business. Organizations increasingly understand the importance of recent regulations such as the General Data Protection Regulation (GDPR), which mandates protections of the personal data for citizens throughout the EU. This understanding is gaining traction as organizations grapple with similar U.S.-state privacy regulations and guidance, such as the California Consumer Privacy Act (CCPA). From a compliance perspective, this is a breath of fresh air, since organizations are required to provide evidence they’ve documented (and thus have a handle on) their internal processes and all the hands through which their data passes.



Catastrophes can take many forms ‒ from an active shooter to a chemical hazard or natural disaster ‒ and businesses must always have emergency response plans ready for those situations.

Authorities will be dispatched to your workplace as quickly as possible in the event of an emergency. Your emergency preparedness plan must be designed to help employees quickly respond in order to save lives and avoid further injury.

Here is how organizations should approach three of the most common emergencies:



Recently, the United States experienced a once-in-a-lifetime weather event when temperatures dropped drastically to record lows.

The National Weather Service in Chicago predicted it would be the chilliest Arctic outbreak since records have been kept. Biting winds caused the wind chill to hit life-threatening lows. In Thief River Falls, the AccuWeather RealFeel® Temperature was -77° F!

The real threat of the polar vortex was felt in the workplace as work days were canceled, employees called in, and the post office shut down. “Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds” – but the frigid temperatures did, delaying shipping for two days.

Severe weather is becoming a more common risk to businesses worldwide. Houston, Texas, has had three 500-year floods in the last five years. The California wildfires affected the agriculture industry, impacting wine, fruit, nut, livestock, and poultry production. The 2011 tsunami and subsequent nuclear event in Japan forced Toyota, Nissan, Honda, Mitsubishi, and Suzuki to suspend production.



Monday, 11 February 2019 16:00

What’s your severe weather risk?

If you are reading this for the snark and jokes, thank you. We are so sorry to disappoint you, because we’re not sure how to make corporate climate risks funny. Instead, let’s have a sober discussion about climate risk, and you can leave the jokes in the comment section.

What Is It?

The Green New Deal is an ambitious proposal in the US to combat climate change. Named after President Franklin D. Roosevelt’s New Deal to combat the Great Depression, the Green New Deal is a massive stimulus package aimed to address climate change, as well as the rising social, economic, and political inequality in the US that comes with it. It calls for economic mobilization not seen since World War II and the New Deal and aims to cut greenhouse gas emissions (GHGs) in half by 2030, shift 100 percent of national power generation to renewable sources, upgrade all infrastructure and transportation for energy efficiency, decarbonize the largest polluting industries (manufacturing and agriculture), fund the capture of GHGs, and virtually eliminate poverty in the US by including everyone in the prosperity that this transition would provide.

Although it’s unlikely to pass through the US Senate this time around (never mind the President’s desk), we believe that businesses must adopt the goals of the deal to avoid going extinct.



Ugh. Everyone is talking about the citizen data scientist, but no one can define it (perhaps they know one when they see one). Here goes — the simplest definition of a citizen data scientist is: non-data scientist. That’s not a pejorative; it just means that citizen data scientists nobly desire to do data science but are not formally schooled in all the ins and outs of the data science life cycle. For example, a citizen data scientist may be quite savvy about what enterprise data is likely to be important to create a model but may not know the difference between GBM, random forest, and SVM. Those algorithms are data scientist geek-speak to many of them. The citizen data scientist’s job is not data science; rather, they use it as a tool to get their job done. Here is my definition of the enterprise citizen data scientist:

A businessperson who aspires to use data science techniques such as machine learning to discover new insights and create predictive models to improve business outcomes.
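This is also why the geek-speak matters less than it sounds: modern libraries hide GBM, random forest, and SVM behind one uniform fit/predict interface, so a citizen data scientist can try all three and compare results without knowing their internals. A minimal sketch, assuming scikit-learn is available and using a synthetic dataset as a stand-in for enterprise data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for "enterprise data likely to be important to a model"
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The three algorithms name-checked above, behind the same interface
models = {
    "GBM": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, acc in sorted(scores.items()):
    print(f"{name}: held-out accuracy {acc:.2f}")
```

The point is not which model wins on this toy data; it is that the businessperson in the definition above can run this comparison and judge the business outcome, leaving the algorithmic fine print to the library.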



Monday, 11 February 2019 15:31

Who Are You, Citizen Data Scientist?

Weather tools help Team Rubicon respond quicker and reduce risks

By Glen Denny, President, Enterprise Solutions, Baron Critical Weather Solutions

Team Rubicon is an international disaster response nonprofit with a mission of using the skills and experiences of military veterans and first responders to rapidly provide relief to communities in need. Headquartered in Los Angeles, California, Team Rubicon has more than 80,000 volunteers around the country ready to jump into action when needed to provide immediate relief to those affected by natural disasters.

More than 80 percent of the disasters Team Rubicon responds to are weather-related, including crippling winter storms, catastrophic hurricanes, and severe weather outbreaks – like tornadoes. While always ready to serve, the organization needed better weather intelligence to help them prepare and mitigate risks. After adopting professional weather forecasting and monitoring tools, operations teams were able to pinpoint weather hazards, track storms, view forecasts, and set up custom alerts. And the intelligence they gained made a huge difference in the organization’s response to Hurricanes Florence and Michael.

Team Rubicon relies on skills and experiences of military veterans and first responders

About 75 percent of Team Rubicon volunteers are military veterans, who find that their skills in emergency medicine, small-unit leadership, and logistics are a great fit for disaster response, and that their training helps them hunker down in challenging environments to get the job done. A further 20 percent of volunteers are trained first responders, while the rest come from all walks of life. The group is a member of National Voluntary Organizations Active in Disaster (National VOAD), an association of organizations that mitigate and alleviate the impact of disasters.

By focusing on underserved or economically-challenged communities, Team Rubicon seeks to make the largest impact possible. According to William (“TJ”) Porter, manager of operational planning, Team Rubicon’s core mission is to help those who are often forgotten or left behind; they place a special emphasis on helping under-insured and uninsured populations.

Porter, a 13-year Air Force veteran, law enforcement officer, world traveler, and former American Red Cross worker, proudly stands by Team Rubicon’s service principles, “Our actions are characterized by the constant pursuit to prevent or alleviate human suffering and restore human dignity – we help people on their worst day.”

Weather-related disasters pose special challenges

The help Team Rubicon provides for weather-related disasters runs the gamut: removing trees from roadways, clearing paths for service vehicles, bringing in supplies, conducting search and rescue missions (including boat rescues), mucking out flooded homes, remediating mold, and just about anything else needed. While Team Rubicon had greatly expanded its equipment inventory in recent years to handle these tasks, the organization lacked the deep weather intelligence that could help it understand and mitigate risks – and keep its teams safe from danger.

That’s where Baron comes into the story. After learning at the Virginia Emergency Management Conference of the impressive work Team Rubicon is doing, a Baron team member struck up a conversation, asking if the organization needed detailed and accurate weather data to help plan its efforts. Team Rubicon jumped at the opportunity, and Baron ultimately donated access to its Baron Threat Net product. Key features allow users to pinpoint weather hazards by location, track storms, view forecasts, and set up custom alerts, including location-based pinpoint alerting and standard alerts from the National Weather Service (NWS). The web portal weather monitoring system provides street-level views and the ability to layer numerous data products. Threat Net also offers a companion mobile application that gives Team Rubicon access to real-time weather monitoring on the go.
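At its simplest, location-based pinpoint alerting of this kind reduces to a proximity test: fire a notification when a tracked hazard comes within a set radius of a team's position. A minimal sketch using the haversine great-circle distance; the coordinates and the `should_alert` helper are illustrative assumptions, not Baron's implementation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def should_alert(team_pos, hazard_pos, radius_km):
    """True when a tracked hazard is within the alert radius of a team."""
    return haversine_km(*team_pos, *hazard_pos) <= radius_km

# Hypothetical positions: a team near Wilmington, NC, and two storm cells
team = (34.23, -77.94)
cell_close = (34.30, -77.90)   # a few km north of the team
cell_distant = (36.85, -75.98) # roughly Virginia Beach, far outside range
print(should_alert(team, cell_close, 25))    # True: inside the 25 km radius
print(should_alert(team, cell_distant, 25))  # False
```

A production system layers much more on top (storm motion vectors, NWS polygon warnings, push delivery), but every "pinpoint" alert ultimately rests on a geometric test like this one.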

This suited Team Rubicon down to the ground. “In years past, we didn’t have a good way to monitor weather,” explains Porter. “We went onto the NWS, but our folks are not meteorologists, and they don’t have that background to make crucial decisions. Baron Threat Net helped us understand risks and mitigate the risks of serious events. It plays a crucial role in getting teams in as quickly as possible so we can help the greatest number of people.”

New weather tools help with response to major hurricanes

The new weather intelligence tools have already had a huge impact on Team Rubicon’s operations. Take the example of how access to weather data helped Team Rubicon with its massive response to Hurricane Florence. A day or so before the hurricane was due to make landfall, Dan Gallagher, Enterprise Product Manager and meteorologist at Baron Services, received a call from Team Rubicon, requesting product and meteorological support. Individual staff had been using the new Baron Threat Net weather tools to a degree since gaining access to them, but the operations team wanted more training and support in the face of what looked like a major disaster barreling towards North Carolina, South Carolina, Virginia, and West Virginia.

Gallagher, a trained meteorologist with more than 18 years of experience in meteorological research and software development, quickly hopped on a plane, arriving at Team Rubicon’s National Operations Center in Dallas. His first task was to meet operational manager Porter’s request to help them guide reconnaissance teams entering the area. They wanted to place a reconnaissance team close to the storm – but not in mortal danger. Using the weather tools, Gallagher located a spot north of Wilmington, NC between the hurricane’s eyewall and outer rain bands that could serve as a safe spot for reconnaissance.

The next morning, Gallagher provided a weather briefing to ensure that operations staff had the latest weather intelligence. “I briefed them on where the storm was, where it was heading, the dangers that could be anticipated, areas likely to be most affected, and the hazards in these areas.”

Throughout the day, Gallagher conducted a number of briefings and kept the teams up to date as Hurricane Florence slowly moved overland. He also provided video weather briefings for the reconnaissance team in their car en route to their destination.

Another crew, based in Charlotte, was planning the safest route for trucking in supplies based on weather conditions. They wanted help choosing whether to haul the trailer from Atlanta, GA or Alexandria, VA. “I was not there to make a recommendation on an action but rather to give them the weather information they needed to make their decision,” explains Gallagher. “As a meteorologist, I know what the weather is, but they decide how it impacts their operation. As soon as I gave a weather update, they could make a decision within seconds and act on it.” Team Rubicon used the information Gallagher provided to select the Alexandria, VA route; its crackerjack logistics team was then able to quickly make all the needed logistical arrangements.

In addition to weather briefings, Gallagher provided more detailed product training on Baron Threat Net, observed how the teams actually use the product, and learned how the real-time products were performing. He also got great feedback on other data products that might enhance Team Rubicon’s ability to respond to disasters.

Team Rubicon gave very high marks to the high-resolution weather/forecast model available in Baron Threat Net. They relied upon the predictive precipitation accumulation and wind speed information, as well as information on total precipitation accumulation (what has already fallen in the past 24 hours).

The wind damage product showing shear rate was very useful to Team Rubicon. In addition, the product did an excellent job of detecting rotation, including picking out the weak tornadoes spawned from the hurricane that were present in the outer rain bands of Hurricane Florence. These are typically very difficult to identify and warn people about, because they spin up quickly and are relatively shallow and weak (with tornado damage of EF0 or EF1 as measured on the Enhanced Fujita Scale). Gallagher had seen how well the wind damage product performed in larger tornado cases but was particularly gratified at how well it helped the team detect these smaller ones.

For example, Lauren Vatier of Team Rubicon’s National Incident Management Team commented that she had worked with Baron Threat Net before the Florence event, but using it so intensively made her more familiar with the product and really helped cement her knowledge. “Before Florence I had not used Baron Threat Net for intel purposes. Today I am looking for information on rain accumulation and wind, and I’m looking ahead to help the team understand what the situation will look like in the future. It helps me understand and verify the actual information happening with the storm. I don’t like relying on news articles. Now I can look into the product and get accurate and reliable information.”

Vatier also really likes the ability to pinpoint information on a map showing colors and ranges. “You can click on a point and tell how much accumulation has occurred or what the wind speed is. The pinpointing is a valuable part of Baron Threat Net.” The patented Baron Pinpoint Alerting technology automatically sends notifications any time impactful weather approaches; alert types include severe storms and tornadoes; proximity alerts for approaching lightning, hail, snow and rain; and National Weather Service warnings. She concludes, “I feel empowered by the program. It ups my confidence in my ability to provide accurate information.”

TJ Porter concurred that Baron Threat Net helped Team Rubicon mobilize the large teams that deployed for Hurricane Florence. “It is crucial to put people on the ground and make sure they’re safe. Baron Threat Net helps us respond quicker to disasters. It also helps the strike teams ensure they are not caught up in other secondary or rapid onset weather events.”

Porter explains that the situation unit leaders actively monitor weather through the day using Baron Threat Net. “We are giving them all the tools at our disposal, because these are the folks who provide early warnings to keep our folks safe.”

Future-proofing weather data

Being on the ground with Team Rubicon during the Hurricane Florence disaster recovery response gave Baron’s Gallagher an unusual opportunity to discuss other ways Baron weather products could help respond to weather-related disasters. According to Porter, “We are looking to Baron to help us understand secondary events, like the extensive flooding resulting from Hurricane Florence, and to understand where these hazards are today, tomorrow, and the next day.”

In addition, Team Rubicon is committed to targeting those areas of greatest need, so they want to be able to layer weather information with other data sets, especially social vulnerability, including location of areas with uninsured or underinsured populations. Says Porter, “Getting into areas we know need help will shave minutes, hours, or even days off how long it takes to be there helping.”

In the storm’s aftermath

At the time this article was written, hundreds of Team Rubicon volunteers were deployed as part of Hurricane Florence response operations and later in response to Hurricane Michael. Their work has garnered them a tremendous amount of national appreciation, including a spotlight appearance during Game 1 of the World Series. T-Mobile used its commercial television spots to support the organization, also pledging to donate $5,000 per post-season home run plus $1 per Twitter or Instagram post using #HR4HR to Team Rubicon.

Baron’s Gallagher appreciated the opportunity to see in real time how customers use its products, saying “The experience helped me frame improvements we can develop that will positively affect our clients using Baron Threat Net.”

A quality business continuity management (BCM) program is made up of six separate plans covering everything from emergency response to IT disaster recovery. In today’s post, we’ll explain what the six plans are and share some tips to help your organization devise them.

Somebody once said, “A goal without a plan is just a wish.” A less-known variation of the quote (much less known) is, “A goal with six plans is a BCM program.”

These six plans are the ones you need to be able to respond, recover, and return to normal operations after a business disruption. What are the six? The answer is coming up.

One caveat before we begin: our title says every BCM program should have these plans, but there are a couple of exceptions, as I’ll explain below.

Here are the six plans, in order of importance:



(TNS) - The full scope of a project aimed to prevent roughly 200 Highland properties from being included in a flood map was presented to the city council this week.

The Federal Emergency Management Agency (FEMA) periodically updates its flood maps to take into account new developments, changes in topography, etc., and how those changes may put surrounding areas at a greater risk for flooding.

In 2017, Highland officials began studying potential problem areas shown on the preliminary flood maps FEMA released to replace those created in 1986.

Those drafts tripled the area of the 1986 floodplains, increased the number of “high-risk” parcels from 135 to 365, and showed a flood elevation upstream of the CSX railroad of about six and a half feet, adding roughly 100 acres to the floodplain.



(TNS) - When there's a potential tornado threat to Dallas-Fort Worth, outdoor sirens let residents know, but not every Texas city operates the same warning system.

The city of Dallas has 162 sirens to warn residents about an imminent weather emergency. Fort Worth has 153. But Austin, Houston and San Antonio have gone a different route.

Austin Miller was born in downtown Dallas and grew up in Richardson and Garland, before moving to Houston. While living in the Houston area — first in Sugar Land and later in Cypress — Miller noticed the absence of the sirens he had grown used to in North Texas.



Organizations may be tempted to dismiss artificial intelligence as something which is currently out of their reach, but Thorsten Kurpjuhn says that this is definitely not the case. In fact, AI can help businesses of all sizes to ensure network uptime and protection.

Business reliance on IT has grown exponentially over the past few years. Not only has this put a strain on existing IT network set-ups, but it has also seen the role and expectations of the network administrator change beyond all recognition in a bid to keep everything running smoothly and securely.

There was a time when those in charge of the network knew where they stood and had the time and resources to deal with reliability and unexpected security issues – which were a less frequent occurrence. But in a world where technology underpins every activity and transaction, there is now a need to spin multiple moving plates to ensure operational efficiency.

Managing hybrid cloud networks, reacting to the overwhelming amount of big data residing on the network, handling the growing number of connected mobile devices all wanting to access the WiFi, and countering the ever-increasing risk and prevalence of cyber threats are now the order of the day, making network monitoring a very different beast.



(TNS) - Law enforcement officials on the East End in New York are investigating a spate of prank emergency calls that have triggered heavy police responses and are forcing officials to do more to authenticate time-consuming and potentially dangerous incidents.  

The bogus calls, known as swatting because they often draw a police department’s SWAT team and other first responders, have targeted high-profile people such as celebrities, as well as those in the gaming community, a law enforcement source said.

“Generally they involve a report of a murder, or a kidnapping or both,” said Southampton Town Police Chief Steven Skrynecki. “These events trigger a significant and serious response.”

They can also be costly and dangerous.



(TNS) - Have you ever seen an emergency alert on your phone or heard a radio program interrupted by a harsh tone followed by a warning?

Here’s what you need to know about emergency alerts and the authorities behind them:

What are emergency alerts?

Whenever there’s a serious emergency affecting a large group of people, it can be important to deliver information swiftly and through reliable channels.

In 2006, then-President George W. Bush signed an executive order to set up an “effective, reliable, integrated, flexible and comprehensive system” to alert and warn the American people in situations of war, terrorist attack, natural disaster or other hazards to public safety.

Under that order, the Federal Emergency Management Agency created something called the Integrated Public Alert & Warning System, which is now used by government and emergency agencies across the United States to communicate with the American people in times of trouble. IPAWS can be used to deliver many different kinds of emergency alerts, including Amber Alerts, severe weather warnings and messages like the kerosene alert sent in Baltimore County on Wednesday.



NORTHPORT, N.Y. – Cybersecurity Ventures is excited to release this special first annual edition of the Cybersecurity Almanac, a handbook containing the most pertinent statistics and information for tracking cybercrime and the cybersecurity market.

Cisco’s commitment to security and partnerships starts at the top, and it’s one of the reasons why we’re collaborating with them. “At Cisco, security is foundational to everything we do,” said Chuck Robbins, Chairman and CEO. Last year Cisco blocked seven trillion threats, or 20 billion threats a day, on behalf of their customers, according to Robbins.

Cisco and Cybersecurity Ventures have compiled 100 of the most important facts, figures, statistics, and predictions to help frame the global cybercrime landscape, and what the cybersecurity industry is doing to help protect governments, citizens, and organizations globally.

Cybersecurity Ventures conducts our own ground-up research — plus we vet, synthesize, and repurpose research from the most credible sources (analysts, researchers, associations, vendors, industry experts, media publishers) — to provide our readers with a bird’s-eye view of the most dangerous cyber threats and the most important solutions.



By Alex Winokur, founder of Axxana


Disaster recovery is now on the list of top concerns of every CIO. In this article we review the evolution of the disaster recovery landscape, from its inception until today. We look at the current understanding of disaster behavior and, consequently, of disaster recovery processes. We also try to cautiously anticipate the future, outlining the main challenges associated with disaster recovery.

The Past

The computer industry is relatively young. The first commercial computers appeared somewhere in the 1950s—not even seventy years ago. The history of disaster recovery (DR) is even younger. Table 1 outlines the appearance of the various technologies necessary to construct a modern DR solution.


Table 1 – Early history of DR technology development


From Magnetic Tapes to Data Networks

The first magnetic tapes for computers were used as input/output devices. That is, input was punched onto punch cards that were then stored offline to magnetic tapes. Later, UNIVAC I, one of the first commercial computers, was able to read these tapes and process their data. Later still, output was similarly directed to magnetic tapes that were connected offline to printers for printing purposes. Tapes began to be used as a backup medium only after 1954, with the introduction of the mass storage device (RAMAC).


Figure 1: First Storage System - RAMAC


Although modern wide-area communication networks date back to 1974, data has been transmitted over long-distance communication lines since 1837 via telegraphy systems. These telegraphy communications have since evolved to data transmission over telephone lines using modems.

Modems were introduced on a massive scale in 1958 to connect United States air defense systems; however, their throughput was very low compared to what we have today. The FAA clustered system used communication links that had been designed for computers to communicate with their peripherals (e.g., tapes). Local area networks (LANs) as we now know them had not been invented yet.

Early Attempts at Disaster Recovery

It wasn’t until the 1970s that concerns about disaster recovery started to emerge. In that decade, the deployment of IBM 360 computers reached a critical mass, and they became a vital part of almost every organization. Until the mid-1970s, the perception was that if a computer failed, it would be possible to fall back to paper-based operation as was done in the 1960s. However, the widespread rise of digital technologies in the 1970s led to a corresponding increase in technological failures; at the same time, theoretical calculations, backed by real-world evidence, showed that switching back to paper-based work was not practical.

The emergence of terrorist groups in Europe like the Red Brigades in Italy and the Baader-Meinhof Group in Germany further escalated concerns about the disruption of computer operations. These left-wing organizations specifically targeted financial institutions. The fear was that one of them would try to blow up a bank’s data centers.

At that time, communication networks were in their infancy, and replication between data centers was not practical.

Parallel workloads. IBM came up with the idea of using the FAA clustering technology to build two adjoining computer rooms, separated by a steel wall, with one cluster node in each room. The idea was to run the same workload twice and to be able to immediately fail over from one system to the other in case one system was attacked. A closer analysis revealed that in the case of a terror attack, the only surviving object would be the steel wall, so the plan was abandoned.

Hot, warm, and cold sites. The inability of computer vendors (IBM was the main vendor at the time) to provide an adequate DR solution made way for dedicated DR firms like SunGard to provide hot, warm, or cold alternate sites. Hot sites, for example, were duplicates of the primary site; they independently ran the same workloads as the primary site, as communication between the two sites was not available at the time. Cold sites served as repositories for backup tapes. Following a disaster at the primary site, operations would resume at the cold site by allocating equipment, restoring from the backups, and restarting the applications. Warm sites were a compromise between a hot site and a cold site. These sites had hardware and connectivity already established; however, recovery was still done by restoring the data from backups before the applications could be restarted.

Backups and high availability. The major advances in the 1980s were around backups and high availability. On the backup side, regulations requiring banks to have a testable backup plan were enacted. These were probably the first DR regulations to be imposed on banks; many more followed through the years. On the high availability side, Digital Equipment Corporation (DEC) made the most significant advances in LAN communications (DECnet) and clustering (VAXcluster).

The Turning Point

On February 26, 1993, the first bombing of the World Trade Center (WTC) took place. This was probably the most significant event shaping the disaster recovery solution architectures of today. People realized that the existing disaster recovery solutions, which were mainly based on tape backups, were not sufficient. They understood that too much data would be lost in a real disaster event.

SRDF. By this time, communication networks had matured, and EMC became the first to introduce storage-to-storage replication software, called Symmetrix Remote Data Facility (SRDF).


Behind the Scenes at IBM

At the beginning of the nineties, I was with IBM’s research division. At the time, we were busy developing a very innovative solution to shorten the backup window, as backups were the foundation for all DR and the existing backup windows (dead hours during the night) started to be insufficient to complete the daily backup. The solution, called concurrent copy, was the ancestor of all snapshotting technologies, and it was the first intelligent function running within the storage subsystem. The WTC event in 1993 left IBM fighting the “yesterday battles” of developing a backup solution, while giving EMC the opportunity to introduce storage-based replication and become the leader in the storage industry.


The first few years of the 21st century will always be remembered for the events of September 11, 2001—the date of the complete annihilation of the World Trade Center. Government, industry, and technology leaders realized then that some disasters can affect the whole nation, and therefore DR had to be taken much more seriously. In particular, the attack demonstrated that existing DR plans were not adequate to cope with disasters of such magnitude. The notion of local, regional, and nationwide disasters crystalized, and it was realized that recovery methods that work for local disasters don’t necessarily work for regional ones.

SEC directives. In response, the Securities and Exchange Commission (SEC) issued a set of very specific directives in the form of the “Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System.” These regulations, still in effect today, bind all relevant financial institutions. The DR practices that were codified in the SEC regulations quickly propagated to other sectors, and disaster recovery became a major area of activity for all organizations relying on IT infrastructure.

The essence of these regulations is as follows:

  1. The economic stance of the United States cannot be compromised under any circumstance.
  2. Relevant financial institutions are obliged to resume operations correctly, without any data loss, by the next business day following a disaster.
  3. Alternate disaster recovery sites must use different physical infrastructure (electricity, communication, water, transportation, and so on) than the primary site.

Note that Requirements 2 and 3 above are somewhat contradictory. Requirement 2 necessitates synchronous replication to facilitate zero data loss, while Requirement 3 basically dictates long distances between sites—thereby making the use of synchronous replication impossible. This contradiction is not addressed within the regulations and is left to each implementer to deal with at its own discretion.
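To see why Requirements 2 and 3 pull in opposite directions, consider the physics: a synchronous write cannot be acknowledged until the remote site confirms it, so every write pays at least one round trip over the link. The following back-of-the-envelope sketch is illustrative only (it is not drawn from the regulations) and assumes signal propagation in optical fiber at roughly two-thirds the speed of light, about 200 km per millisecond:

```python
# Illustrative sketch: the minimum latency penalty synchronous replication
# adds per write, as a function of the distance between the two sites.
# Assumption: signal propagation in fiber at ~200 km per millisecond.

FIBER_SPEED_KM_PER_MS = 200.0

def sync_write_penalty_ms(distance_km: float) -> float:
    """Minimum added latency per synchronous write: one round trip."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (10, 100, 1000):
    print(f"{km:>5} km -> at least {sync_write_penalty_ms(km):.1f} ms per write")
```

At a distance large enough to plausibly guarantee independent infrastructure (hundreds of kilometers), every committed transaction waits many milliseconds, which is why long-distance installations traditionally fell back to asynchronous replication and accepted some data loss.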

The secret to resolving this contradiction lies in the ability to reconstruct missing data if or when data loss occurs. The nature of most critical data is such that there is always at least one other instance of this data somewhere in the universe. The trick is to locate it, determine how much of it is missing in the database, and augment the surviving instance of the database with this data. This process is called data reconciliation, and it has become a critical component of modern disaster recovery. [See The Data Reconciliation Process sidebar.]


The Data Reconciliation Process

If data is lost as a result of a disaster, the database becomes misaligned with the real world. The longer this misalignment exists, the greater the risk of application inconsistencies and operational disruptions. Therefore, following a disaster, it is very important to realign the databases with the real world as soon as possible. This process of alignment is called data reconciliation.

The reconciliation process has two important characteristics:

  1. It is based on the fact that the data lost in a disaster exists somewhere in the real world, and thus it can be reconstructed in the database.
  2. The duration and complexity of the reconciliation is proportional to the recovery point objective (RPO); that is, it’s proportional to the amount of data lost.

One of the most common misconceptions in disaster recovery is that RPO (for example, RPO = 5 minutes) refers to how many minutes of data the organization is willing to lose. What RPO really means is that the organization must be able to reconstruct and reconsolidate (i.e., reconcile) the last five minutes of missing data. Note that the higher the RPO (and therefore, the greater the data loss), the longer the RTO and the costlier the reconciliation process. Catastrophes typically occur when RPO is compromised and the reconciliation process takes much longer than planned.

In most cases, the reconciliation process is quite complicated, consisting of time-consuming processes to identify the data gaps and then resubmitting the missing transactions to realign the databases with real-world status. This is a costly, mainly manual, error-prone process that greatly prolongs the recovery time of the systems and magnifies risks associated with downtime.
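The core of the reconciliation step described above, finding the gap between the surviving database and an external record of the same transactions, can be sketched as a simple set difference. This is a hypothetical illustration, not a real reconciliation tool; all record names and values are invented:

```python
# Hypothetical sketch of data reconciliation: compare the surviving
# database against an external copy of the truth (e.g., a counterparty's
# records) to find the transactions lost in the disaster.

def find_missing(surviving_db, external_record):
    """Return transactions present in the external record but absent
    from the surviving database; these must be resubmitted."""
    known_ids = {txn["id"] for txn in surviving_db}
    return [txn for txn in external_record if txn["id"] not in known_ids]

surviving = [{"id": 1, "amount": 50}, {"id": 2, "amount": 75}]
external  = surviving + [{"id": 3, "amount": 20}, {"id": 4, "amount": 90}]

gap = find_missing(surviving, external)
print([t["id"] for t in gap])  # transactions that must be resubmitted
```

In practice the hard part is the first step, locating a trustworthy external instance of the data; the larger the RPO, the larger this gap and the longer the mostly manual resubmission takes.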


The Present

The second decade of the 21st century has been characterized by new types of disaster threats, including sophisticated cyberattacks and extreme weather hazards caused by global warming. It is also characterized by new DR paradigms, like DR automation, disaster recovery as a service (DRaaS), and active-active configurations.

These new technologies are for the most part still in their infancy. DR automation tools attempt to orchestrate a complete site recovery through invocation of one “site failover” command, but they are still very limited in scope. A typical tool in this category is the VMware Site Recovery Manager (SRM). DRaaS attempts to reduce the cost of DR-compliant installation by locating the secondary site in the cloud. The new active-active configurations try to reduce equipment costs and recovery time by utilizing techniques that are used in the context of high availability; that is, to recover from a component failure rather than a complete site failure.

Disasters vs. Catastrophes

The following definitions of disasters and disaster recovery have been refined over the years to make a clear distinction between the two main aspects of business continuity: high availability protection and disaster recovery. This distinction is important because it crystalizes the difference between disaster recovery and a single component failure recovery covered by highly available configurations, and in doing so also accounts for the limitations of using active-active solutions for DR.

A disaster in the context of IT is either a significant adverse event that causes an inability to continue operation of the data center or a data loss event where recovery cannot be based on equipment at the data center. In essence, disaster recovery is a set of procedures aimed to resume operations following a disaster by failing over to a secondary site.

From a DR procedures perspective, it is customary to classify disasters into 1) regional disasters like weather hazards, earthquakes, floods, and electricity blackouts and 2) local disasters like local fires, onsite electrical failures, and cooling system failures.

Over the years, I have also noticed a third, independent classification of disasters: disasters can also be classified as catastrophes. In principle, a catastrophe is a disaster in the course of which something very unexpected happens, causing the disaster recovery plans to dramatically miss their service level agreement (SLA); that is, they typically exceed their recovery time objective (RTO).

When DR procedures go as planned for regional and local disasters, organizations fail over to a secondary site and resume operations within pre-determined parameters for recovery time (i.e., RTO) and data loss (i.e., RPO). The organization’s SLAs, business continuity plans, and risk management goals align with these objectives, and the organization is prepared to accept the consequent outcomes. A catastrophe occurs when these SLAs are compromised.

Catastrophes can also result from simply failing to execute the DR procedures as specified, typically due to human errors. However, for the sake of this article, let’s be optimistic and assume that DR plans are always executed flawlessly. We shall concentrate only on unexpected events that are beyond human control.

Most of the disaster events that have been reported in the news recently (for example, the Amazon Prime Day outage in July 2018 and the British Airways bank holiday outage in 2017) have been catastrophes related to local disasters. If DR could have been properly applied to the disruptions at hand, nobody would have noticed that there had been a problem, as the DR procedures were designed to provide almost zero recovery time and hence zero down time.

The following two examples provide a closer look at how catastrophes occur.

9/11 – Following the September 11 attack, several banks experienced major outages. Most of them had a fully equipped alternate site in Jersey City—no more than five miles away from their primary site. However, the failover failed miserably because the banks’ DR plans called for critical personnel to travel from their primary site to their alternate site, but nobody could get out of Manhattan.

A data center power failure during a major snow storm in New England – Under normal DR operations at this organization, the data was synchronously replicated to an alternate site. However, 90 seconds prior to a power failure at the primary site, the central communication switch in the area lost power too, which cut all WAN communications. As a result, the primary site continued to produce data for 90 seconds without replication to the secondary site; that is, until it experienced the power failure. When it finally failed over to the alternate site, 90 seconds of transactions were missing; and because the DR procedures were not designed to address recovery where data loss has occurred, the organization experienced catastrophic down time.

The common theme of these two examples is that in addition to the disaster at the data center there was some additional—unrelated—malfunction that turned a “normal” disaster into a catastrophe. In the first case, it was a transportation failure; in the second case, it was a central switch failure. Interestingly, both failures occurred to infrastructure elements that were completely outside the control of the organizations that experienced the catastrophe. Failure of the surrounding infrastructure is indeed one of the major causes for catastrophes. This is also the reason why the SEC regulations put so much emphasis on infrastructure separation between the primary and secondary data center.

Current DR Configurations

In this section, I’ve included examples of two traditional DR configurations that separate the primary and secondary center, as stipulated by the SEC. These configurations have predominated in the past decade or so, but they cannot ensure zero data loss in rolling disasters and other disaster scenarios, and they are being challenged by new paradigms such as that introduced by Axxana’s Phoenix. While a detailed discussion would be outside the scope of this article, suffice it to say that Axxana’s Phoenix makes it possible to avoid catastrophes such as those just described—something that is not possible with traditional synchronous replication models.


Figure 2 – Typical DR configuration


Typical DR configuration. Figure 2 presents a typical disaster recovery configuration. It consists of a primary site, a remote site, and another set of equipment at the primary site, which serves as a local standby.

The main goal of the local standby installation is to provide redundancy to the production equipment at the primary site. The standby equipment is designed to provide nearly seamless failover capabilities in case of an equipment failure—not in a disaster scenario. The remote site is typically located at a distance that guarantees infrastructure independence (communication, power, water, transportation, etc.) to minimize the chances of a catastrophe. It should be noted that the typical DR configuration is very wasteful. Essentially, an organization has to triple the cost of equipment and software licenses—not to mention the increased personnel costs and the cost of high-bandwidth communications—to support the configuration of Figure 2.


Figure 3 – DR cost-saving configuration


Traditional ideal DR configuration. Figure 3 illustrates the traditional ideal DR configuration. Here, the remote site serves both for DR purposes and high availability purposes. Such configurations are sometimes realized in the form of extended clusters like Oracle RAC One Node on Extended Distance. Although traditionally considered the ideal, they are a trade-off between survivability, performance, and cost. The organization saves on the cost of one set of equipment and licenses, but it compromises survivability and performance. That’s because the two sites have to be in close proximity to share the same infrastructure, so they are more likely to both be affected by the same regional disasters; at the same time, performance is compromised due to the increased latency caused by separating the two cluster nodes from each other.


Figure 4 – Consolidation of DR and high availability configurations with Axxana’s Phoenix

True zero-data-loss configuration. Figure 4 represents a cost-saving solution with Axxana’s Phoenix. In case of a disaster, Axxana’s Phoenix provides zero-data-loss recovery at any distance. So, with the help of Oracle’s high availability support (fast start failover and transparent application failover), Phoenix provides functionality very similar to extended cluster functionality. With Phoenix, however, it can be implemented over much longer distances and with much lower latency, providing true cost savings over the typical configuration shown in Figure 2.

The Future

In my view, the future is going to be a constant race between new threats and new disaster recovery technologies.

New Threats and Challenges

In terms of threats, global warming creates new weather hazards that are fiercer, more frequent, and far more damaging than in the past—and in areas that have not previously experienced such events. Terror attacks are on the rise, thereby increasing threats to national infrastructures (potential regional disasters). Cyberattacks—in particular ransomware, which destroys data—are a new type of disaster. They are becoming more prolific, more sophisticated and targeted, and more damaging.

At the same time, data center operations are becoming more and more complex. Data is growing exponentially. Instead of getting simpler and more robust, infrastructures are getting more diversified and fragmented. In addition to legacy architectures that aren’t likely to be replaced for a number of years to come, new paradigms like public, hybrid, and private clouds; hyperconverged systems; and software-defined storage are being introduced. Adding to that are an increasing scarcity of qualified IT workers and economic pressures that limit IT spending. All combined, these factors contribute to data center vulnerabilities and to more frequent events requiring disaster recovery.

So, this is on the threat side. What is there for us on the technology side?

New Technologies

Of course, Axxana’s Phoenix is at the forefront of new technologies that guarantee zero data loss in any DR configuration (and therefore ensure rapid recovery), but I will leave the details of our solution to a different discussion.

AI and machine learning. Apart from Axxana’s Phoenix, the most promising technologies on the horizon revolve around artificial intelligence (AI) and machine learning. These technologies enable DR processes to become more “intelligent,” efficient, and predictive by using data from DR tests, real-world DR operations, and past disaster scenarios; in doing so, disaster recovery processes can be designed to better anticipate and respond to unexpected catastrophic events. If correctly applied, these technologies can shorten RTO and significantly increase the success rate of disaster recovery operations. The following examples suggest only a few of their potential applications in various phases of disaster recovery:

  • They can be applied to improve the DR planning stage, resulting in more robust DR procedures.
  • When a disaster occurs, they can assist in the assessment phase to provide faster and better decision-making regarding failover operations.
  • They can significantly improve the failover process itself, monitoring its progress and automatically invoking corrective actions if something goes wrong.

When these technologies mature, the entire DR cycle from planning to execution can be fully automated. They carry the promise of much better outcomes than processes done by humans because they can process and better “comprehend” far more data in very complex environments with hundreds of components and thousands of different failure sequences and disaster scenarios.
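As a toy illustration of the monitoring idea described above, a failover runbook can be modeled as a sequence of steps whose durations are compared against baselines learned from past DR tests, with an alert raised when a step runs abnormally long. The step names, durations, and threshold below are hypothetical, not any vendor's product behavior:

```python
# Toy sketch: monitor failover steps against historical baselines and
# flag any step that exceeds an anomaly threshold for corrective action.
from statistics import mean, stdev

# Hypothetical durations (seconds) recorded in past DR tests.
history = {
    "demote_primary":   [12, 14, 13, 15, 12],
    "promote_standby":  [30, 28, 33, 31, 29],
    "redirect_clients": [8, 9, 7, 8, 9],
}

def is_anomalous(step: str, duration_s: float, n_sigmas: float = 3.0) -> bool:
    """Flag a step whose duration is more than n_sigmas above its historical mean."""
    samples = history[step]
    return duration_s > mean(samples) + n_sigmas * stdev(samples)

# During a live failover, each completed step is checked as it finishes:
observed = {"demote_primary": 13, "promote_standby": 95, "redirect_clients": 8}
for step, took in observed.items():
    if is_anomalous(step, took):
        print(f"ALERT: {step} took {took}s -- invoke corrective action")
```

A real system would learn far richer models than a mean-plus-sigma threshold, but the pattern is the same: accumulate data from tests and real operations, then use it to detect and react to deviations automatically during a failover.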

New models of protection against cyberattacks. The second front where technology can greatly help with disaster recovery is on the cyberattack front. Right now, organizations are spending millions of dollars on various intrusion prevention, intrusion detection, and asset protection tools. The evolution should be from protecting individual organizations to protecting the global network. Instead of fragmented, per-organization defense measures, the global communication network should be “cleaned” of threats that can create data center disasters. So, for example, phishing attacks that would compromise a data center’s access control mechanisms should be filtered out in the network — or in the cloud — instead of reaching and being filtered at the end points.


Disaster recovery has come a long way—from naive tape backup operations to complex site recovery operations and data reconciliation techniques. The expenses associated with disaster protection don’t seem to go down over the years; on the contrary, they are only increasing.

The major challenge of DR readiness is in its return on investment (ROI) model. On one hand, a traditional zero-data-loss DR configuration requires organizations to implement and manage not only a primary site, but also a local standby and remote standby; doing so essentially triples the costs of critical infrastructure, even though only one third of it (the primary site) is utilized in normal operation.

On the other hand, if a disaster occurs and the proper measures are not in place, the financial losses, reputation damage, regulatory backlash, and other risks can be devastating. As organizations move into the future, they will need to address the increasing volumes and criticality of data. The right disaster recovery solution will no longer be an option; it will be essential for mitigating risk, and ultimately, for staying in business.

Thursday, 07 February 2019 18:15

Disaster Recovery: Past, Present, and Future

(TNS) - The Georgia Emergency Management and Homeland Security Agency encourages Georgians to become proactive about preparing for severe weather by participating in Severe Weather Preparedness Week (Feb. 4-8).

“This state has an unpredictable history when it comes to severe weather,” said GEMA/HS Director Homer Bryson. “Whether it’s hurricanes, tornadoes or severe thunderstorms, Georgians need to be sure of one thing … that they’re prepared for any disaster. During Severe Weather Preparedness Week, we’re dedicated to educating our citizens on how to better prepare for sudden weather events.”

Spring (March, April, May) is typically the time when the threat of tornadoes, damaging winds, large hail and frequent lightning from severe storms is at its highest across Georgia. Take advantage of Severe Weather Preparedness Week to review your family's emergency procedures and prepare for weather-related hazards.



RPA is software that mimics the activity of a human being in carrying out tasks within a business process and thereby frees human capital to be utilized in other areas. The software bots are programmed to do manual tasks and are relatively lightweight in that they reside on top of existing systems and applications. Recent surveys indicate that anywhere from 30-50% of RPA projects fail. Ever wondered why there are so many instances of companies not making it past the initial stages of their RPA initiative? The lack of a consistent process to identify the right automation opportunities and prioritize them inevitably results in organizations fumbling early in their RPA journey, and in some cases giving up on it altogether. Identifying and prioritizing candidates for automation are critical steps before one can pilot RPA and build the business case to move forward.

Figure 1 outlines the four-step approach we recommend to begin the RPA journey, each of which needs to involve engaging the right stakeholders who not only have the authority to take decisions, but also have sufficient insights regarding the process areas under consideration.
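One common way to operationalize the identify-and-prioritize steps is a weighted scoring matrix filled in with those stakeholders. The following is a minimal sketch; the criteria, weights, and process names are illustrative assumptions, not the article's framework:

```python
# Toy weighted-scoring sketch for ranking RPA automation candidates.
# Criteria and weights are illustrative; a real program tunes them with
# the stakeholders who know each process area.
WEIGHTS = {"volume": 0.40, "rule_based": 0.35, "input_stability": 0.25}

# Each candidate process scored 1-10 per criterion by the stakeholders.
candidates = {
    "invoice_entry":   {"volume": 9, "rule_based": 8, "input_stability": 7},
    "claims_triage":   {"volume": 6, "rule_based": 4, "input_stability": 5},
    "report_assembly": {"volume": 7, "rule_based": 9, "input_stability": 9},
}

def score(c: dict) -> float:
    """Weighted sum of criterion scores."""
    return sum(WEIGHTS[k] * v for k, v in c.items())

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(candidates[name]):.2f}")
```

The point of a matrix like this is less the arithmetic than the discipline: it forces the right stakeholders to agree on criteria before anyone builds a bot, which is exactly where failed RPA initiatives tend to skip ahead.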



The ability of an organization to continue operating during a disruption has never been more important. So it’s no surprise that ISO 22301, the internationally recognized standard for a business continuity management system (BCMS), is being updated to make sure it remains relevant to today’s business environment.

As the first ISO standard based on the High Level Structure (HLS), it has a strong foundation that now aligns with many other internationally recognized management system standards such as ISO 9001 quality management and ISO/IEC 27001 information security management. However, there are areas of improvement highlighted by users, particularly around less prescriptive procedures and updated terms and definitions, that need to be considered to ensure it remains relevant in a changing business landscape.



(TNS) - Some three months after Tropical Storm Michael caused damage in North Carolina, the Federal Emergency Management Agency has declared 21 counties eligible for federal aid.

Michael — which made landfall in the Florida panhandle in October and then made its way north through the Carolinas — caused flooding and wind damage through central North Carolina. The tropical storm, which had been a hurricane when it made landfall in Florida, came just a few weeks after Hurricane Florence ravaged the eastern part of the state with flooding and wind damage.

The federal disaster designation from FEMA will allow city and county governments, state agencies, some non-profits and religious institutions to be paid back for money used to repair buildings and infrastructure.

“This is good news for cities, towns and counties that suffered damages from Michael, which came right on the heels of Hurricane Florence,” N.C. Gov. Roy Cooper, who requested the designation from FEMA, said in a statement. “Cleaning up from Michael took a lot of local government resources, and this will help communities recover those funds.”



(TNS) - More than four months after Hurricane Florence battered the state, rivers of waste are still flowing to landfills in eastern North Carolina in volumes that their managers say they have never before seen.

Uprooted trees, broken furniture, sodden carpets, soggy sheet rock, smashed fencing, crushed carports and moldy clothing make up the mix of items destroyed by the September storm and subsequent flooding.

The trash piling up at some sites may not be disposed of until summer — or perhaps not until next year. Caravans of trucks are bringing new waste daily, and solid waste workers are logging major overtime to keep up with the load.



Local emergency managers are looking for about 200 volunteers to be "crisis actors" and help make more realistic an evacuation drill in March.

Chatham Emergency Management Agency will conduct a full-scale exercise March 26 at the Coastal Georgia Center and the Savannah Civic Center. This exercise will test the Evacuation Assembly Area plan for the county. The plan is implemented when the public needs transportation assistance during a county evacuation order, as happened with Hurricanes Irma and Matthew. The last time the exercise was conducted was 2015.

Volunteers should be willing to play the role of actors, simulating the general population. The volunteers will be transported from the Coastal Georgia Center to the Civic Center, where they will be "screened and processed" before being returned to the Coastal Georgia Center. They will be transported back and forth multiple times throughout the exercise, but will have opportunities to rest. Food will be provided at both sites.



Software attacks, theft of intellectual property or sabotage are just some of the many information security risks that organizations face. And the consequences can be huge. Most organizations have controls in place to protect them, but how can we ensure those controls are enough? The international reference guidelines for assessing information security controls have just been updated to help.

For any organization, information is one of its most valuable assets and data breaches can cost heavily in terms of lost business and cleaning up the damage. Thus, controls in place need to be rigorous enough to protect it, and monitored regularly to keep up with changing risks.

Developed by ISO and the International Electrotechnical Commission (IEC), ISO/IEC TS 27008, Information technology – Security techniques – Guidelines for the assessment of information security controls, provides guidance on assessing the controls in place to ensure they are fit for purpose, effective and efficient, and in line with company objectives.

The technical specification (TS) has recently been updated to align with new editions of other complementary standards on information security management, namely ISO/IEC 27000 (overview and vocabulary), ISO/IEC 27001 (requirements) and ISO/IEC 27002 (code of practice for information security controls), all of which are referenced within.



As a higher education administrator, you know better than anyone the importance of timely communication on a campus, especially in a crisis.

In 2018, we saw schools across the country suffer from violence, and together we had to accept that campuses are targets for what was once the unthinkable.

Then there are other risks such as severe weather conditions or even day-to-day communications that need to be addressed. The one thing that is clear is the importance of having effective communication strategies in place to ensure campus safety.

Don’t assume that your campus is immune to crisis. The number of shootings on or near college campuses increased by 153% between the 2001 and 2006 academic years. And shooting incidents are predicted to increase during the next decade. Take action in 2019 to improve communication with your many stakeholders, including students, faculty and staff, families, community members, and others.

We have the tips you need to make better communication a reality for your campus in 2019.



So you live in Wisconsin, far away from any hurricanes and ocean storm surges. You don’t need flood insurance, right?

Wrong. The big thaw spreading across the Midwest is a perfect lesson for why you do, in fact, need flood insurance.

The Wall Street Journal reported that in Lone Rock, Wisconsin, temperatures rose 80 degrees in three days, from minus 39 to 41. That kind of temperature swing is a recipe for floods.

Most obviously, melting ice and snow can swell rivers. But especially worrisome are “ice jams,” which form when frozen rivers melt into large ice chunks that can lodge together and block the river’s flow. In the worst cases, these artificial dams cause serious flooding in the area around the river.



3 Predictions for 2019

From Google’s GDPR violation to data breaches happening just hours after the new year, 2019 is off to a crazy start, especially for risk managers. In anticipation of the months ahead, LogicGate CEO Matt Kunkel predicts what GRC professionals should be prepared for in 2019. 

There’s no doubt risk managers stayed busy in 2018. From the GDPR rollout in May to numerous data breaches, these events come as no surprise to industry observers. To industry pros, data breaches are no longer seen in terms of “if,” but “when.” Every year, we continue to see companies collect enormous volumes of personal data, increasing the pressure placed on risk managers. However, in 2018, companies were finally held accountable for failing to protect customer data – just ask Mark Zuckerberg.

Looking ahead, what can GRC professionals expect in 2019? Below, I discuss three issues GRC professionals should be prepared for in 2019.



Monday, 04 February 2019 16:53

The State Of GRC

Rick Cudworth and Abigail Worsfold from the Deloitte crisis and resilience team provide a review of the new PD CEN/TS 17091 European technical specification for crisis management, which was launched in December 2018. 

A new technical specification for crisis management calls for a more strategic approach to the discipline. PD CEN/TS 17091 ‘Crisis Management – Building a Strategic Capability’ is a welcome intervention designed to help organizations develop this important capability. In this article we highlight four specific areas where the new technical specification advances good practice and provides more detailed guidance:

Crisis Management as a strategic capability

The technical specification’s expanded title – ‘building a strategic capability’ – is significant.

First, when things go wrong, and inevitably they will at some point, responding effectively will help keep the organization on track. Research published by Aon and Pentland Analytics (Reputation Risk in the Cyber Age – The Impact on Shareholder Value, August 2018) shows that companies that effectively respond to a crisis will out-perform those that don’t in terms of shareholder value. Organizations that see crisis management as a strategic discipline are more likely to respond effectively when a crisis occurs.



Align Your Risk Approach to Your Unique Business Realities

LockPath’s Colby Smith discusses the reasons an integrated approach to risk management is an imperative – chief among them digital processes, global business and a reliance on third parties.

Digital transformation, globalization and outsourcing have given rise to unprecedented productivity, innovation, efficiency, collaboration and knowledge. However, with these business improvements come new risks.

Modern business risks are multifaceted: they impact operations and compliance simultaneously and morph from cyber risk to supply chain risk or any number of combinations. The interdependencies created by digital systems, third parties and automation can lead to a cascade of negative incidents if the interplay of risk factors is not carefully considered and managed. Enterprises can no longer afford to silo risk management efforts. Too many blind spots and hazards lie in the space between departmentalized programs.

Integrated risk management (IRM) practices and technology solutions are designed to address an enterprise’s particular ecosystem of risks. Gartner defines IRM as “a set of practices and processes supported by a risk-aware culture and enabling technologies that improve decision-making and performance through an integrated view of how well an organization manages its unique sets of risks.” Gartner recently introduced a new Magic Quadrant for IRM, confirming its growing importance as an advanced approach to dealing with ever-changing combinations of cyber, operational, geopolitical, regulatory, legal and financial risks.



Friday, 01 February 2019 15:06

The Case For Integrated Risk Management

(TNS) - More than a dozen people bustled about the basement of The River - A Community Church in New Kensington at midday Wednesday.

Pots of chicken noodle and vegetable broth soups simmered on a stovetop while flaky, doughy biscuits baked in the oven below.

Some volunteers prepped a refreshment area offering a variety of drinks and snacks — freshly brewed coffee, bottled water, fruit juice, hot chocolate and made-from-scratch cookies as well as store-bought ones. Others set up several tables and chairs and put on display an array of donated items up for grabs — beanies, scarves, gloves, coats and baby blankets.

Shortly before 3 p.m., a pair of men lugged outside a large sign labeled “Warming Center” and planted it in the church’s snow-blanketed lawn with an arrow pointing passersby to the basement entrance.



A new report published by Lloyds explores the impacts and economic costs of a future highly effective ransomware attack and concludes that the global economy is not ready to deal with such an attack.

The report, ‘Bashe attack: Global infection by contagious malware’ explores a scenario in which a ransomware attack is launched through an infected email, which once opened is forwarded to all contacts and within 24 hours encrypts all data on nearly 30 million devices worldwide. 

The report estimates a cyber attack on this scale could cost $193bn and affect more than 600,000 businesses worldwide; and states that the global economy is underprepared for these types of incident, with 86 percent of the total economic losses uninsured, leaving an insurance gap of $166bn. 
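The report's headline figures are internally consistent, as a quick check of the arithmetic shows (variable names below are just for illustration):

```python
# Sanity check of the Lloyds 'Bashe attack' scenario figures:
# $193bn total estimated cost, 86% of losses uninsured.
total_loss_bn = 193        # estimated global cost of the scenario
uninsured_share = 0.86     # portion of losses not covered by insurance

gap_bn = total_loss_bn * uninsured_share
print(f"Uninsured losses (insurance gap): ~${gap_bn:.0f}bn")
```

Multiplying the $193bn estimate by the 86 percent uninsured share reproduces the report's $166bn insurance gap.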



(TNS) - Safety across the region was a priority Wednesday, Jan. 30, as arctic air forced temperatures and wind chills to as low as 65 below zero.

Hundreds of flights were cancelled across the Midwest on Wednesday. At Chicago's O'Hare International Airport, over 600 flights — about half of the flights originating from the airport — were cancelled and over 600 flights into O'Hare were also cancelled, according to flight tracker FlightAware.

Holland resident and Holland United Church of Christ pastor Bryan Berghoef has been in Toronto for work, and said his flights to Chicago have been cancelled for the past three days.

"It's very cold here," Berghoef said. "It's one below zero and feels like 11 below. Fortunately, I am staying with friends, so not stuck in the airport or at a hotel."



Just like athletes, business continuity practitioners need to be mentally tough in order to perform well in the face of the many stresses that go with their role. In today’s post, we’ll look at some of the factors that make business continuity uniquely stressful and share some tips to help you execute at a high level even when you’re under pressure.

If there’s one thing you can say for sure about the Super Bowl this weekend, it’s that it will feature exceptional athletes and coaches striving to perform to the best of their ability under great pressure. The team that does the best job of maintaining its poise and focus will probably win.

In business continuity, we talk a lot about the need to make our organizations resilient, but we must also be resilient as individuals, leaders, and crisis managers. Just like in sports, our mental robustness will be a big factor in how well we perform.



Who doesn’t love a meeting? Well, quite a few of us, actually. Here Loulla-Mae Eleftheriou-Smith asks the experts for their advice on minimizing meeting time while maximizing results, whether that means standing gatherings or curated guest lists

Meetings, scheduled or impromptu, can be something of a minefield when they’re conducted badly. Getting pulled into team discussions, summoned for a company update, or even just grabbed for a quick catch-up that ends up taking 45 minutes is easily done. But let them overrun your day and it can feel as if you have no time left for your actual work.

With research from eShare(1) showing that the average worker attends 4.4 meetings a week – more than half of which are thought to be unnecessary – it’s easy to see why. But when meetings are done well, they benefit everyone in the room. Here we speak to a range of experts about their tips for making meetings more productive.



Thursday, 31 January 2019 16:07

How to Have More Productive Meetings

2018 brought no calm to the smartphone industry, and Apple’s Q1 2019 earnings are an indication that the trend will continue this year. After years of consistent double-digit growth rates, a slowdown in smartphone sales that started in 2017 continued in 2018. Some reasons behind this include:

  • Major economies slowed. In 2018, many of the world’s large economies began to lose steam (or even come to a halt). While the economic challenges in Latin America and some African countries have become familiar in the past few years, 2018 saw China’s GDP growth dip, as well. This has had a direct impact on smartphone demand in China and has affected the overall global smartphone picture.
  • Smartphone penetration in key markets reached saturation. Smartphone demand has slowed due to market saturation in most North American and European countries, so the primary opportunities there stem from device replacement (“replacement opportunity”). Forrester forecasts that these economies will add no more than 91 million new smartphone subscribers — just 11% of the total number of new smartphone subscribers that will be added through 2023. Moreover, increased prices mean that consumers will replace their phones less frequently, further slowing smartphone demand.


The secret to leading a remote team effectively comes down to communication and cultivating caring relationships. Etan Smallman reports

While a prized window seat, flexible working hours and access to a fancy coffee machine may all make us feel more positive about work, they pale in comparison to the importance of having a good boss.

According to Gallup’s 2017 State of the American Workplace, about half of US workers have left a job to get away from a terrible boss. And only 21% of workers think their performance “is managed in a way that motivates them to do outstanding work.” In contrast, other studies show that a boss who is able to engage his team can expect higher productivity levels and reduced costs from staff turnover.

If you manage a team, you may be asking yourself how you can be this second kind of leader – especially if you’re someone managing a team of remote workers. Luckily, best-selling author and self-styled "Doctor of Happiness" Andy Cope is on hand to help. He’s spent more than a decade studying the impact positivity has on inspiring others and argues that a leader’s job is not to inspire people, but to be inspired.

Following the release of his latest book, Leadership: the Multiplier Effect, he explores how this applies to the new flexible world of work.



Technology has changed virtually everything: the way we work, the way we live, the way we play, and, at a basic level, how humans relate to each other.

On the business front, it has fueled the power shift from institutions to customers, powered the emergence of startups and super-scaled platforms that are undercutting and remaking traditional markets, and changed the way employees work.

Technology’s impact is both revolutionary and unsurprising. Ironically, it has not (yet) changed the fundamental model of IT. But that is going to change.

It’s not just the inevitable force of technology. Here is a shortlist of dynamics that conspire to create a far different future for IT:



If you want to be a remote worker, you may need to convince your boss. Daniel Mobbs has the lowdown on the skills required, from effective time management to confident communications

While remote working is becoming increasingly popular, not everyone is a natural-born remote employee. Where some people are primed to excel beyond the four walls of the office HQ, others are more likely to crash and burn without plans and strategies in place for a new way of working. “Candidates must be honest about their own ability to handle the heightened responsibilities and expectations that accompany remote work,” says Anthony Curlo, CEO of IT recruiting and staff augmentation firm DaVinciTek(1).

The good news is that these skills and strategies can be learned and developed by almost anyone. Look at successful remote workers and you’ll see many of the same traits popping up again and again. So if you’re trying to convince your boss (or even yourself) that remote working is the right thing for you, start by honestly asking yourself how many of the characteristics below you share with your free-range colleagues.



Thursday, 31 January 2019 15:58

Do You Have What It Takes to Work Remotely?

(TNS) — Erie County officials stressed caution to Western New Yorkers during their first briefing of what could be another eventful weather day.

They advised residents to adhere to travel advisories issued in several municipalities, notably in the Northtowns, which have received the brunt of the storm. Travel advisories are in place for the city of Buffalo and northern Erie County.

"To underestimate this would be a mistake," said Greg Butcher, Erie County Deputy Commissioner for Homeland Security and Preparedness. "I think it’s the totality of all of the things brought together. The snow event itself, the high winds, mixed with the extremely cold temperatures are the things we need to be concerned about."



(TNS) — Camp Fire survivors Lisa Butcher and Randy Viehmeyer remember waking up one night to the screams of a nearby shelter resident reliving the nightmare of watching her dog burn alive.

Having bounced from one chaotic and sometimes dangerous shelter to another, the couple said they’ve experienced a kind of volatile “hell” since their Paradise home burned down last November during the Camp Fire, the deadliest and most destructive wildfire in California history.

And with the final remaining evacuation shelter for victims set to close Thursday in Chico, their fate is once again up in the air.



(TNS) — Even before the worst of the polar vortex gripped the Midwest, emergency rooms here already had treated several patients with cold-weather-related injuries.

Now, the arctic air that's enveloping the Rock River Valley gets truly dangerous with wind chill values on Wednesday that could plummet as low as 54 degrees below zero, according to the National Weather Service.

The record low temperatures spurred dire warnings from meteorologists, widespread closures of schools and businesses and led to a disaster proclamation from Gov. J.B. Pritzker, who said a wide variety of state resources will be available to help communities affected by the winter storms and bitter cold weather.



It’s hard to imagine pitchers and catchers reporting in a mere 12 days while another polar vortex rips through the Midwest.  Arctic blasts plunging the thermometer to 27 degrees below zero in some states? It’s safe to say our friends in Minnesota won’t be throwing the baseball around in the backyard this week. 

But a baseball-less future is probably the least of your worries right now. Extreme cold is dangerous – and expensive, if the pipes in your house freeze and burst. Water damage could cost you as much as $5,000, if not more. 



(TNS) — Dangerously cold weather is expected to hit much of the state through Thursday, which means folks ought to prepare for their own safety and the safety of animals.

According to the National Weather Service in Aberdeen, temperatures are expected to drop to 35 below in the area tonight, and that's not even the wind chill.

The record low is 32 below, which was set in 1916.

State Climatologist Laura Edwards said the area saw similar frigid weather in January 2014.

"We do see cold like this, but not every year. Again, that doesn't mean we can ignore how much it can harm people and animals," Edwards said.

"You can get frostbite in as little as 10 minutes. Being protected is the biggest concern," she said.



Forrester has just published our forecast for US tech employment and compensation (see “US Tech Talent Market Outlook: Low Unemployment And Rising Wages Present New Challenges for CIOs”). It has some foreboding news for CIOs and for tech vendors: Tech talent will be harder to find and more expensive over the next two years.

The good news is that the supply of tech workers has largely kept up with demand — annual wage growth for tech workers has generally hovered between 2.0% and 2.5% since 2015. But the current data available for 2018 suggests that wage growth is starting to accelerate. This acceleration poses a special threat to CIOs, who could find themselves paying premiums for certain tech roles in high demand.

Here’s a summary of our forecast for the US tech labor market over the next two years:



(TNS) — Saying they feel an urgency to act fast, California officials this week will launch the main phase of wildfire debris removal in Butte County, scene of November’s devastating Camp Fire.

But a potential problem has emerged: Nearly half of the property owners in the hill country around Paradise have not given the government permission to enter their properties to do the work.

County officials this week said they are making an extra push to get the word out to people who own burned property, informing them that they are required to have their land cleaned of ash and other fire debris, either through a free state-run program or by hiring their own contractor and paying for it themselves.



(TNS) - Gov. Tony Evers declared a state of emergency in Wisconsin on Monday, because of the heavy snows that have fallen and the extreme cold still to come.

"I’m concerned about the safety and well-being of our residents as this major storm and bitter cold moves in," Evers said in a release.

The state of emergency authorizes the adjutant general of the Wisconsin National Guard to call up military personnel to active duty if the need arises, and for all state agencies to be available if called on.

This request came from Wisconsin Emergency Management in case Guard units are needed to assist with emergencies in any affected parts of the state.



(TNS) — Equipment owned by California's three largest utilities ignited more than 2,000 fires in three years — a timespan in which state regulators cited and fined the companies nine times for electrical safety violations.

How the state regulates utilities is under growing scrutiny following unprecedented wildfires suspected to have been caused by power line issues, blazes that have destroyed thousands of homes and killed dozens of people.

Lacking the manpower and sophisticated technology necessary to monitor more than 250,000 miles of power lines across the state, regulators rely on something of an honor system, with utilities responsible for ensuring all trees and vegetation are cut back far enough from electrical equipment before the onset of dry, high-fire danger conditions.



You’d like to think that if you would just come up with a few good ideas, work hard, and have a stroke of luck, then your business would succeed. Alas, it’s not that simple. There are things that could do great damage to your company’s prosperity, and, what’s worse, they could come out of the blue. You might not even be aware that they’re slowly taking place, yet there they are, doing damage to your business. Below, we take a look at some of these threats, which can frequently fly under the radar.



A Landmark Settlement with Lessons Learned for Compliance Officers

Walgreens reached a settlement on Tuesday, concluding a six-year investigation into the company’s pharmacy and drug-pricing practices, initiated by a whistleblower and former pharmacy manager. CCI reports on the specifics of the case and the historic settlement.

The attorney representing the whistleblower in a historic case against Walgreens says the landmark settlement sends a clear message to compliance officers everywhere.

“Compliance officers, your job is vital,” said Andrew M. Beato of Stein Mitchell Beato & Missner LLP, speaking by phone Wednesday to Corporate Compliance Insights. “Compliance officers are the last line of defense (for a company) before it goes to this type of outcome.”

Walgreens will pay $60 million in the largest-ever settlement by a pharmacy chain for overcharging for drugs.  Announced Tuesday, the settlement was the result of a complaint filed in 2012 by a former pharmacy manager in Florida.



Monday, 28 January 2019 14:55

The Walgreens Whistleblower

Safety is the first priority for any company that seeks to protect employees and customers. Knowing the hazards that exist in workplace offices, equipment, and machinery is the first step toward preventing injury or even death.

The Occupational Safety and Health Administration (OSHA) publishes a list of its most frequently cited violations in the workplace. By examining this list, employers can analyze the dangers inherent in their workplaces and plan to avoid them.

“Knowing how workers are hurt can go a long way toward keeping them safe,” said National Safety Council President and CEO Deborah A.P. Hersman. “The OSHA Top 10 list calls out areas that require increased vigilance to ensure everyone goes home safely each day.”



Friday, 25 January 2019 16:21


The best business continuity managers run their programs like entrepreneurs running their own companies. In today’s post, I’ll share seven tips to help you adopt this world-beating, program-enhancing attitude.

One of the best models for running a BCM program is that of an independent entrepreneur. This is because when you’re leading a business, you can’t hide. You have to deliver the goods, take responsibility for your work, and satisfy your customers.

If you take this approach to your role as a BCM manager, you and your staff will find greater fulfillment in your work, your managers and stakeholders will be pleased and impressed, your program will thrive, and your company will be better protected.



3 Trends and Predictions

In the year ahead, companies will need to find meaningful and measurable ways to align and integrate risk management with core business objectives to pursue and meet their company’s goals. LockPath’s Sam Abadir discusses how, as organizations of all sizes and types undertake this vital work, he sees the risk ecosystem, increased board engagement and compliance accountability as three trends that will challenge their progress and innovation.

2018 was quite the year. Between regulatory regimes, global competition and cyber threats, the cautionary tales of what can happen dominated headlines. The Equifax investigation findings were disturbing, Google had multiple cascading incidents and the Marriott breach effects continue to unfold.

Maintaining, growing and evolving the ecosystem of digital equipment, services and data is central to the modern enterprise; however, digital business moves fast and creates risk in its wake. The ability to remain viable in today’s business depends greatly on the effectiveness of risk management and compliance programs, as well as digital systems and processes, data governance and security practices.



Friday, 25 January 2019 16:18

The Key To Risk Management Success In 2019

As the volume and variety of cyber attacks on businesses continue to grow, the need for better incident response has never been greater. Stephen Moore discusses how to build an effective CSIRT and the role it can play in protecting an enterprise in the event of a breach.

A few years ago, the idea of a dedicated computer security incident response team (CSIRT) may have seemed luxurious. Fast forward to the present day and for many it’s become essential. A CSIRT differs from a traditional security operations center (SOC), which focuses purely on threat detection and analysis. Instead, a CSIRT is a cross-functional response team, consisting of specialists who can deal with every aspect of a security incident, including members of the SOC team. The effort could include the technical aspects of a breach, assisting legal, managing internal communications, and even creating content for those who must field media enquiries.

Key roles and responsibilities within a CSIRT

In addition to the conventional duties of a SOC, a CSIRT must also fulfil a variety of non-technical, but equally important roles and responsibilities. This requires a much wider set of skills, and getting the right balance of personnel is key. Some members may be full-time, while others are only called in occasionally, but they will all bring key skills to the table if and when they are needed.

At a minimum, an effective CSIRT will contain the following members:



In a volatile market environment and with the edict to ‘do more with less’, many financial institutions are beginning efforts to re-engineer their risk management programs, according to a new survey by Deloitte.

70 percent of the financial services executives surveyed said their institutions have either recently completed an update of their risk management program or have one in progress, while an additional 12 percent said they are planning to undertake such a renewal effort. A big part of this revitalization will be leveraging emerging technologies, with 48 percent planning to modernize their risk infrastructure by employing new technologies such as robotic process automation (RPA), cognitive analytics, and cloud computing.

"Financial institutions face a formidable set of challenges posed by today's more complex and uncertain risk environment," said Edward Hida, a partner with Deloitte Risk and Financial Advisory at Deloitte US and the author of the report. "With budget cuts common — and a big focus on effectiveness and efficiency as the torrent of regulatory change has slowed — this will require institutions to rethink their traditional assumptions and employ fundamentally new approaches."



(TNS) - Anchorage property values increased slightly overall in 2018, though a few hundred homeowners may be looking at a lower home value because of damage caused by the Nov. 30 earthquake and its aftershocks, city assessors say.

About a week ago, the Anchorage property-tax appraisal office began mailing tens of thousands of assessment notices known as green cards. They show the city’s estimate of what a property would sell for on Jan. 1. It’s a critical step in determining a household or business property tax bill, which pays for essential city services such as police, firefighters and snowplowing.

This year, those with significant damage from the 7.0 earthquake may be asked to pay less.

In recent weeks, property appraisers worked with building-safety officials to identify damaged buildings. Officials have been using a system of green, yellow and red tags to indicate damage. Red-tagged buildings are unsafe to enter; a yellow tag means limited occupancy.



(TNS) — Matt Brown, emergency services chief for Allegheny County, Wednesday urged municipalities battered by landslides and recurring flooding to compete for a new pot of federal funding.

Up to $10 million in hazard-mitigation grants has been made available statewide, despite FEMA denying Western Pennsylvania’s request to declare a disaster because of a spate of landslides and related challenges last year.

FEMA denied the local request but granted federal disaster status to 10 counties in Eastern Pennsylvania in December — and 15 percent of that funding, or about $9 million to $10 million, will be split among municipalities in need scattered across the state.



Alex Janković claims that some managed service providers have managed to equate business continuity with IT disaster recovery, resulting, at best, in confusion among those new to the profession and, at worst, the development of business continuity plans that are not fit for purpose.

There is something which bothers me as a management consultant in the business continuity and information technology fields. Have you tried to search for the terms ‘Business Continuity’ or ‘Business Continuity Planning’ on Google or Bing search engines recently? Please do and the results may surprise you. Once you skip over a few Google ads and relevant, but not local, articles, you will find link after link to articles written by local managed service providers (MSPs).

If you are wondering what an MSP is, TechTarget defines it as “a company that remotely manages a customer's IT infrastructure and/or end-user systems, typically on a proactive basis and under a subscription model.” But I digress.

If you are brave enough to click on any of those links, you will be met with a carefully designed and written corporate landing page. Each will open with some high-level, somewhat relevant business continuity jargon, but within the first few sentences the narrative will shift from business continuity to IT disaster recovery. Furthermore, if you care to continue reading, these MSPs will start to pitch whatever product or vendor they are licenced to sell and distribute. The page’s message, tone, and focus are ultimately geared around the capabilities of that product, not the business continuity planning process or methodology itself. On top of that, MSPs will also offer to help your organization develop business continuity or IT disaster recovery plans, which, I am sure, will be geared around the products they are trying to sell you, and will probably be developed without truly understanding the ins and outs and the complexity of your business.



(TNS) - At least five people are dead after an armed man opened fire inside a SunTrust bank in Sebring on Wednesday afternoon, prompting a standoff, authorities said.

Sebring Police Chief Karl Hoglund said the victims were “senselessly murdered” by Zephen Xaver, 21, who surrendered after a Highlands County Sheriff’s Office SWAT team entered the bank.

“Today’s been a tragic day in our community,” Hoglund said. “We’ve suffered a significant loss at the hands of a senseless criminal doing a senseless crime.”

Officials have not publicly identified the victims.



WASHINGTON, DC – AOAC INTERNATIONAL (AOAC) and the International Organization for Standardization (ISO) announce that they have entered into a cooperation agreement for the joint development and approval of common standards and methods. The partnership significantly increases the global relevance and impact of AOAC/ISO standards and methods.

“The AOAC and ISO partnership broadens global acceptance of standards and methods, benefitting all stakeholders and consumers,” said Brad Goskowicz, CEO of Microbiologics, Inc. and President of AOAC INTERNATIONAL. “AOAC and ISO’s commitment and global leadership pave the way for methods to ultimately advance to the Codex method process for consideration as International Standards.”

ISO Secretary-General Sergio Mujica added, “ISO’s partnerships with other relevant organizations are extremely important, as we believe that the best way to meet market needs and provide global solutions is by bringing together the world’s best experts. This agreement will therefore benefit the industry through the joint development of standards that are globally accepted and recognized by Codex. We look forward to collaborating further with AOAC via this agreement to produce effective International Standards.”



Thursday, 24 January 2019 15:18

AOAC and ISO announce cooperation agreement

The Crisis Communications Team (CCT) is the team of professionals within the organization that manages the communication function during a crisis.

This team works closely with the Crisis Management Team (CMT) that makes the important decisions pertaining to crisis communications, business continuity and disaster recovery, the three important management activities that need to be undertaken efficiently and effectively during a crisis.

How do you get the CCT to do its job? Well, for it to start working it needs to be switched on.

Sounds simple, right? Sadly, this is where things often go wrong in how crisis communication plans are conceived. What follows are four considerations for anyone designing a CCT activation process.



If you’ve adopted a mass notification system, you’ve taken an important step towards crisis readiness.

To ensure a successful response to your next emergency, take the time needed now to prepare and fully communicate your emergency response plan to ensure that your crisis communication is quick, responsive, accurate, and efficient.

Preparation is important in any scenario, but especially in emergency response planning and execution. This means having plans in place for known threats, establishing communication strategies, training staff and more. It can truly make the difference between experiencing utter chaos and assisting in establishing community safety.

A well-known example of this comes from evaluating the response to Hurricane Katrina. FEMA has highlighted several preparation-related challenges that came to define Katrina:



(TNS) - The next time a natural disaster threatens the Lincoln area, those coordinating emergency management will be working from a new home in south Lincoln.

In the northwest corner of the Lancaster County Youth Services Center at 1200 Radcliff St., the Lincoln-Lancaster County Emergency Operations Center provides a more spacious hub better equipped to handle a crisis for days on end if need be.

The facility includes beds and showers in the event emergency management officials need to stay at the center for extended periods of time.

“It’s essentially a small dorm or a quiet room,” said Director of Emergency Management Jim Davidsaver, a former Lincoln Police Department captain. “If you just need a break, the new facility gives you that opportunity.”



(TNS) — After spending an exorbitant amount of money on food for county workers during Hurricane Irma, Sarasota County has signed a catering contract with a much cheaper vendor should another natural disaster strike.

The County Commission recently approved a three-year contract with Metz Culinary Management, of Sarasota, as its primary vendor, and Mattison's Catering, also of Sarasota, as its secondary vendor to provide meals for county workers stationed in the Emergency Operations Center in the event of a disaster. The contract with Metz would cost $30.50 for four meals a day per person around the clock — compared with the $26 that the county spent during Irma per person, per meal, ultimately costing taxpayers $130,000.

While county residents hunkered down at home or in shelters as Irma thrashed the region on its trek up the Florida peninsula on Sept. 10 and 11 in 2017, about 400 county employees enjoyed Mattison's Catering, county records show. Under the $130,000 deal, Mattison's staff prepared and delivered enough food to serve up to 5,000 meals from lunch on the Saturday before the storm hit through lunch the following Tuesday — a cost of $26 per person, per meal, according to the purchase order.



What Recent News Means for the Future

The compliance landscape is changing, necessitating changes from the compliance profession as well. A team of experts from CyberSaint discuss what compliance practitioners can expect in the year ahead.

Regardless of experience or background, 2019 will not be an easy year for information security. In fact, we realize it’s only going to get more complicated. However, what we are excited to see is the awareness that the breaches of 2018 have brought to information security – how more and more senior executives are realizing that information security needs to be treated as a true business function – and 2019 will only see more of that.

Regulatory Landscape

As constituents become more technology literate, we will start to see regulatory bodies ramping up security compliance enforcement for the public and private sectors. Along with the expansion of existing regulations, we will also see new cyber regulations come to fruition. While we may not see U.S. regulations similar to GDPR on a federal level in 2019, these conversations around privacy regulation will only become more notable. What we are seeing already is the expansion of the DFARS mandate to encompass all aspects of the federal government, going beyond the Department of Defense.



In our hyper-connected world, IT security covers not just our data but virtually everything that moves – including machinery. Cyber-attacks or IT malfunctions in manufacturing can pose risks to the safety measures in place, thus having an impact on production and people. New international guidance to identify and address such risks has just been published.

“Smart” manufacturing, or that which takes advantage of Internet and digital technology, allows for seamless production and integration across the entire value chain. It also allows for parameters – such as speed, force and temperature – to be controlled remotely. The benefits are many, including being able to track performance and usage and improved efficiencies, but it also exacerbates the risk of IT security threats.

Increasing the speed or force of a machine to dangerous levels, or lowering cooking temperatures to result in food contamination, are just some examples of where cyber-attacks can not only disrupt manufacturing but pose serious risks to us. Happily, a new ISO technical report (TR) has just been published to help manufacturers prepare for and mitigate these risks.



Rapid growth in the use of public cloud services for core business operations is changing the technological landscape. But in the rush towards taking advantage of the agility that public cloud offers are organizations in danger of neglecting a core area of business continuity?

In the last eighteen months the acceleration of public cloud services has been overwhelming. It is no coincidence that the arrival of UK-based instances of Microsoft Azure and Amazon Web Services made more organizations willing to move workloads and data into the public cloud, and has seen these services go from strength to strength.

It is estimated that more than 60 percent of organizations use Office365 email services for part, if not all, of their messaging users. The most popular public cloud services like Microsoft Office365, Azure, Salesforce, Google Suite and Amazon Web Services have lowered the barrier to entry for small and medium-sized businesses to access IT, and many larger organizations have also seen the benefits of moving to a pay-monthly model. Getting access to these professional business applications, billed in a low-cost subscription model, is helping accelerate business growth and agility.



A litany of disruptions and corporate scandals in 2018 showed that while making profits, organizations will be held responsible for their actions in an increasing shift towards more ethical business practices

Last year did not turn out to be great for businesses: there were mounting data privacy concerns around the globe; cyberattacks continued to hobble cities and disrupt business operations in the US; and Brexit uncertainty left UK industries worried. Meanwhile, shocking bank and corporate scandals sparked renewed regulatory interest in Europe, India, and Japan.

Amidst these larger issues, several new laws and regulations came into effect, adding to the complexity of an already challenging business landscape.

With so much that happened over the past year, here are some of the events and stories that stood out:



How prepared are your employees and organization to navigate the next major blizzard?

If you don’t know how you will keep employees informed and safe while maintaining business continuity, there is more you can do.

Imagine your employees waking up in the morning after a snowstorm has hit. The Weather Channel details the icy roads and slippery streets. Social media is crowded with photos and updates of vehicles buried in the snow and children celebrating the snow day. But what about work? Your employees must decide if they should brave the hazardous roads or stay home and potentially miss a workday. With no incoming messages and calls that lead only to voicemail, your employees become confused and frustrated.

Keep in mind that if your employees don’t know what to do in the event of a snow or ice storm, they can put themselves at risk. They may attempt to report to work but find the office closed. Alternatively, they could get into an accident on the way. Please don’t put them, or yourself, in that position.



(TNS) - The Northridge earthquake that hit 25 years ago offered alarming evidence of how vulnerable many types of buildings are to collapse from major shaking.

It toppled hundreds of apartments, smashed brittle concrete structures and tore apart brick buildings.

Since then, some cities have taken significant steps to make those buildings safer by requiring costly retrofitting aimed at protecting those inside and preserving the housing supply.

But many others have ignored the seismic threat. And that has created an uneven landscape that in the coming years will leave some cities significantly better prepared to withstand a big quake than others.



For the past five years Continuity Central has conducted an online survey asking business continuity professionals about their expectations for the year ahead. This article provides the results of the most recent survey and identifies some interesting changes from previous years…


134 survey responses were received, with the majority (78.4 percent) being from large organizations (companies with more than 250 employees). 12.7 percent were from small organizations (50 or fewer employees) and 8.9 percent were from medium-sized organizations (51 to 250 employees).

The highest percentage of respondents was from the USA (38.5 percent), followed by the UK (23.1 percent). Significant numbers of responses were also received from Canada (6.1 percent) and Australasia (5.4 percent).

Change levels

The survey asked respondents: ‘What level of changes do you expect to see in the way your organization manages business continuity during 2019?’

12 percent of respondents expect to see no change in the way their organization manages business continuity. 54.1 percent expect to see small changes, whilst a third (33.9 percent) are anticipating large changes.

The 88 percent of respondents expecting to see changes were asked to provide details of the one area that is likely to have the biggest impact on business continuity practices or strategies within their organization. Key themes that emerged were as follows:
‘Making major revisions to BCM strategies and/or BCP(s)’ topped the list of changes that business continuity managers expected to see in 2018 and, in 2019, this was again top of the list, with 22 percent of respondents saying that this was the biggest change they expected to see.



(TNS) — A critical emergency alert system designed to warn UC Davis students and staff failed to fully notify the campus until more than an hour after Davis police Officer Natalie Corona was shot and killed blocks from the university, officials announced, calling the breakdown “unacceptable.”

The WarnMe-Aggie Alert sends text and email messages to UC Davis students and staff and is designed to alert 70,000 people. But the system initially notified only a fraction of those people about the events unfolding less than a mile from the campus and locked campus public safety officials out of some notification lists.

“The system failure we saw on January 10 was unacceptable and we will take all necessary measures to ensure 100 percent performance in the future,” said UC Davis Chancellor Gary S. May in a statement Tuesday.

The chancellor’s downtown Davis residence is also just blocks away from where the 22-year-old Corona was gunned down and where others were sent fleeing when Kevin Limbaugh opened fire from his bicycle as the rookie officer responded to a traffic stop.



Business continuity practitioners have plenty of reasons to be advocates for good fire and life safety practices at their organizations, even if that’s not one of their core responsibilities.

In today’s post, we’ll share 13 tips business continuity management (BCM) professionals can follow to make sure their companies are doing what they should to promote fire and life safety for their staff and facilities.



Moving Beyond Day-to-Day Data Cleansing

In the financial services industry, regulation on due process and fit-for-purpose data has grown increasingly prescriptive, and the risks of failing to implement a data quality policy can be far-reaching. In this article, Boyke Baboelal of Asset Control looks at how organizations can overcome these challenges by establishing and implementing an effective data quality framework consisting of data identification and risk assessment, data controls and data management.

Too many financial services organizations fail to implement effective data quality and risk management policies. When data comes in, they typically validate and cleanse it first before distributing it more widely. The emphasis is on preventing downstream systems from receiving erroneous data. That’s important, but by focusing on ad hoc incident resolution, organizations struggle to identify and address recurring data quality problems in a structural way.

To rectify this, they need the ability to carry out continuous analysis targeted at understanding their data quality and reporting on it over time. Very few organizations across the industry are currently doing this, and that’s a significant problem. After all, however much data cleansing an organization does, if it fails to track what was done in the past, it will not know how often specific data items contained gaps or completeness and accuracy issues, nor understand where those issues are most intensively clustered. Financial markets are in constant flux and can be fickle, and rules that screen data need to be periodically reassessed.
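The shift described here, from ad hoc cleansing to tracking issues over time, can be sketched in a few lines of code. The sketch below is illustrative only, assuming hypothetical item names and issue types, and is not based on any vendor's API: each validation failure is logged, so recurring problems can later be reported and addressed structurally rather than fixed and forgotten.

```python
# A minimal sketch: record each data validation failure so that
# recurring quality issues can be reported on over time.
from collections import Counter
from datetime import date

class DataQualityLog:
    def __init__(self):
        # Each entry is a (date, item, issue_type) tuple.
        self.issues = []

    def record(self, when, item, issue_type):
        """Log one validation failure, e.g. a 'gap' or a 'stale_price'."""
        self.issues.append((when, item, issue_type))

    def recurring(self, min_count=2):
        """Return (item, issue_type) pairs that recur; these point to
        structural problems rather than one-off incidents."""
        counts = Counter((item, issue) for _, item, issue in self.issues)
        return {key: n for key, n in counts.items() if n >= min_count}

log = DataQualityLog()
log.record(date(2019, 1, 2), "EURUSD_close", "gap")
log.record(date(2019, 1, 3), "EURUSD_close", "gap")
log.record(date(2019, 1, 3), "AAPL_close", "stale_price")

# Only the EURUSD_close gap recurs, flagging it for structural review.
print(log.recurring())  # {('EURUSD_close', 'gap'): 2}
```

In practice a report like this would be aggregated per feed and per period, which is what lets an organization see where issues cluster and reassess its screening rules accordingly.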



(TNS) — Tyler Cooper had a pile of work he needed to tackle at his desk in John Deere’s Cary office on Tuesday, but he and some of his coworkers decided to spend the day doing housework instead.

They boarded a bus in the early morning and headed to the rural Whitestocking community outside Burgaw, a section of Pender County where the Cape Fear River ran 10 feet deep across the landscape during flooding from Hurricane Florence last September.

They climbed into coveralls, put on protective goggles and breathing masks, and crawled under a house to start yanking out insulation still damp from the flood.

“There were a lot of people impacted by the storm,” Cooper said, dragging out torn sheets of ruined yellow fluff. “I just wanted to help out.”

Tens of thousands of homes across Eastern North Carolina were damaged by floodwaters from the storm, and five months later, many still have not been stripped to the studs so they can dry out and be rebuilt.



No business owner wants to think about a violent event happening at their workplace, but each year, more than 2 million American employees report having been a victim of various types of workplace violence. According to the U.S. Bureau of Labor Statistics, 409 workers were fatally injured in work-related attacks in 2014. To put that into perspective, that’s about 8 percent of the 4,821 workplace fatalities in the same year.

What Are The Main Types Of Workplace Violence?

OSHA defines workplace violence as “any act or threat of physical violence, harassment, intimidation, or other threatening disruptive behavior that occurs at the work site. It ranges from threats and verbal abuse to physical assaults and even homicide. It can affect and involve employees, clients, customers, and visitors.”

The National Institute for Occupational Safety and Health reports that the types of workplace violence can be categorized into four buckets:



Thursday, 17 January 2019 14:48


Page 1 of 2