
Industry Hot News


One of the biggest decisions companies face in conducting a Business Impact Analysis (BIA) is what use, if any, they will make of software in doing it. In today’s post we’ll look at the main software options available for doing BIAs, discuss which work best for which types of organizations, and share some tips that can help you succeed no matter what approach you take to using software.


As a reminder, a BIA is the analysis organizations conduct of their business units to determine which processes are the most critically time sensitive, so they can make well-informed decisions in their recovery planning.

In broad terms, there are five approaches companies can take in using software to do their BIAs.
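The prioritization at the heart of a BIA can be sketched in a few lines. This is a minimal illustration with invented process names and figures, not any particular tool's method:

```python
# Hypothetical sketch of the core BIA ranking step: score each business
# process by financial impact per hour of downtime relative to how long
# the business can tolerate it being down, then sort so the most
# time-sensitive processes surface first. All names and numbers are
# illustrative, not drawn from any real BIA.
processes = [
    {"name": "Order processing", "impact_per_hour": 50_000, "max_tolerable_downtime_h": 4},
    {"name": "Payroll",          "impact_per_hour": 10_000, "max_tolerable_downtime_h": 72},
    {"name": "Customer support", "impact_per_hour": 20_000, "max_tolerable_downtime_h": 8},
]

def criticality(p):
    # Higher hourly impact and lower tolerable downtime => more critical.
    return p["impact_per_hour"] / p["max_tolerable_downtime_h"]

ranked = sorted(processes, key=criticality, reverse=True)
for p in ranked:
    print(p["name"], round(criticality(p), 1))
```

Real BIA software layers questionnaires, dependency mapping, and reporting on top, but the ranking logic reduces to something like this.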



CISOs must consider reputation, resiliency, and regulatory impact to establish their organization's guidelines around what data matters most.

Today's CIOs are the stewards of company data, responsible for its health and performance as well as maintenance of the availability, speed, and resiliency their stakeholders expect. CISOs, however, sometimes serve as emergency room doctors for their company's data. Their role is to think about worst-case scenarios, diagnose the severity of incidents, and jump in when incidents happen or are likely. Their first priority is to keep patients alive, but keeping them healthy is worth bonus points.

Like ER doctors, CISOs need rapid prioritization tied to the health of the business to effectively triage incidents. To establish each organization's guidelines around what data matters most, every CISO must consider reputation, resiliency, and regulatory impact.



Friday, 17 May 2019 16:24

The Data Problem in Security

The recent merger of CloudBees and Electric Cloud is a sign of the times in the world of DevOps as integrated DevOps solutions come back into vogue. Not long ago, this would have been looked down upon as a step in the wrong direction when it comes to innovation and providing value to developers. But why is that? What's wrong with an integrated toolchain, and why has it taken so long for users to come around to them and for vendors to offer them? After all, DevOps as a term has been around for about 10 years, so what's the sticking point?

The Dawn Of The Integrated Toolchain

To understand these questions, we need to set the Wayback Machine to the '90s, when the full stack developer automation toolchain was being born. Source code repositories certainly existed, but the automation of continuous integration, unit testing, and deployment did not. That kind of automation would come later, in the early 2000s, as teams from HP, IBM, Micro Focus, and Microsoft created full stack automation tools that managed source code integration, executed unit tests, automated functional tests, and packaged software in a manner that was ready for production. This all sounded great on paper, but it was expensive, not extensible, and captive, meaning once you bought into this toolchain, there was no easy way to get out. Proprietary standards and integration points made it difficult at best to deviate from the prescribed toolchain. These tools were also designed to be managed by IT administrators and not their users, the developers writing and building the code.



Shuffling resources, adding administrative process, and creating a competition and incentive system will do little to grow and mature the talent we need to meet the cybersecurity challenges we face.

The recent Executive Order on America’s Cybersecurity Workforce is intended to bolster public sector cybersecurity talent and improve our ability to hire, train, and retain a skilled workforce. Unfortunately, it ignores the real challenges we face in securing our public infrastructure: high turnover, outdated models, and an excess of administrative processes. Instead, the EO focuses on a series of relatively superficial initiatives seemingly designed to get people more excited about cybersecurity. These include:

• A cybersecurity rotational program
• A common skill set lexicon/taxonomy based on the NICE framework
• An annual cybersecurity competition with financial and other rewards for civilian and military participants 
• An annual cyber education award presented to elementary and secondary school educators
• A skills test to evaluate cyber aptitude in the public sector workforce

While it's great to see the continued focus on addressing our substantial national cyber challenges, this Executive Order is an attempt to address a severe talent shortage by shuffling resources, adding administrative process, and creating a competition and incentive system that will do little to grow and mature the cyber labor force. 



Back in November, Forrester outlined its 2019 predictions for a set of hot emerging technologies. We identified which markets were likely to command big investments in the new year and even predicted that GE would turn a corner this year. Let’s see how we did so far with a few of them.

Additive manufacturing will save General Electric. 2019 has delivered glimmers of hope for GE: Investors have started showing faith in the company's new leadership. GE's stock price is up 42% year-to-date, and optimism is building in GE's aviation unit. With a pipeline of over 60 proprietary 3D-printed parts, GE hopes to literally "reinvent" the engine. Manufacturing these parts with additive methods allows GE to cut out some of its traditional suppliers and win contracts previously held by its competitors. To show off the technology, GE even produced a set of 3D-printed gowns at this year's Met Gala, which resulted in the most positive press the company has received in quite some time. Things also look good for the additive manufacturing market, which blew past its entire 2018 investment total in the first quarter of this year with $445M.



Companies promising the safe return of data sans ransom payment secretly pass Bitcoin to attackers and charge clients added fees.

A new report sheds light on the practices of two US data recovery firms, Proven Data Recovery and MonsterCloud, both of which paid ransomware attackers and charged victims extra fees.

ProPublica researchers were able to trace four payments from a Bitcoin wallet controlled by Proven Data to a wallet controlled by the operators of SamSam ransomware, which caused millions of dollars in damages to cities and businesses across the US. Payments to this wallet, and another connected to the attackers, were banned by the US Treasury Department due to sanctions on Iran, explained former Proven Data employee Jonathan Storfer to researchers.

Proven Data claims to unlock ransomware victims' data using its own technology. Storfer and an FBI affidavit say otherwise: The company instead paid ransom to obtain decryption tools. MonsterCloud, another data recovery firm that claims to employ its own recovery practices, also pays ransoms — without telling the victims, some of which are law enforcement offices.



Mobile apps have become the touchpoint of choice for millions of people to manage their finances, and Forrester regularly reviews those of leading banks. We just published our latest evaluations of the apps of the big five Canadian banks: BMO, CIBC, RBC, Scotiabank, and TD Canada Trust.

Overall, they’ve raised the bar, striking a good balance between delivering robust, high-value functionality and ensuring that it’s easy for customers to get that value with a strong user experience. The top two banks in our review, CIBC and RBC, both made significant improvements to their app user experience (UX) over the past year by focusing on streamlining navigation and workflows. But our analysis also revealed ways all banks can — and should — improve, such as:

Banks should give customers a better view of their financial health. Banks we reviewed don’t provide external account aggregation, and they put the burden on the user to stay on top of their monthly inflows and outflows. They don’t offer useful features such as an account history view that displays projected balances after scheduled transactions hit the account — something leading banks in other regions of the world (like Europe and the US) do offer.



Learn about some of the latest findings on the devastation from a hurricane, and how to prepare your business to withstand this natural catastrophe. Read this infographic by Agility Recovery.


Thursday, 16 May 2019 16:06

The Biggest Hurricane Risk?

What will happen to the plastic bag you threw away with lunch today? Will it sit in a landfill, clog a municipal sanitation system, or end up in your seafood? Concern over this question has helped spur the rise of the new and rapidly growing cultural trend of people aiming to live ‘Zero Waste’. The momentum of this movement has been fueled in part by an international recycling crisis between the United States and China, as described in this slightly grim article, Is this the End of Recycling?

Seeing images of injured marine animals or aerial footage of the Great Pacific Garbage Patch shows us just how much damage this unsolved problem can cause. We can collect data from events that are occurring today to predict trends in consumption and waste reduction. We can track pilot programs of composting and trash reduction and honestly evaluate the results.

All of this sounds negative, but there is a lot of good news! More and more people are prepared to take drastic action to solve the waste and recycling problems that our country will face in the future. Like the strategies used in Business Continuity and Disaster Recovery, the Zero Waste movement tries to anticipate a future problem and mitigate its effects before they happen. To do this, we must rely on tracking real data as it occurs and testing our solutions before they become critical to operations.



City living is on the rise, having gone from 751 million of the world's population in 1950 to 4.2 billion in 2018. What's more, it's expected to reach 6.7 billion in 2050. How can cities adapt and prepare to ensure they provide adequate resources and a sustainable future? They can't improve what they can't measure. The latest in the ISO series of standards for smart cities aims to help.

The ISO 37100 range of International Standards helps communities adopt strategies to become more sustainable and resilient. The newest in the series and just published, ISO 37122, Sustainable cities and communities – Indicators for smart cities, gives cities a set of indicators for measuring their performance across a number of areas, allowing them to draw comparative lessons from other cities around the world and find innovative solutions to the challenges they face.

The standard will complement ISO 37120, Sustainable cities and communities – Indicators for city services and quality of life, which outlines key measurements for evaluating a city's service delivery and quality of life. Together, they form a set of standardized indicators that provide a uniform approach to what is measured and how that measurement is to be undertaken, comparable across cities and countries. The standards also provide guidance to cities on how to assess their performance in contributing to the United Nations Sustainable Development Goals, the global roadmap for a more sustainable world.



Success in the communications services sector is indeed a capricious piece of cheese. In it, every new technology advancement brings new business models, new security and sociopolitical debates, brand new industries of disruptors, and even new job roles for man and machine. As new technologies mature, the distinction between the technology, media, and telecommunications industries blurs. Navigating this space to sustain growth and competitive edge is no easy task for the CxO. Decision makers need to wear many hats to be successful: those of a technologist, financial analyst, investment visionary, and even showman.

If you are an Infrastructure and Operations professional, the challenge is multidimensional. You are called upon to address some of the most important trends and technology progressions ever made and implement them effectively into the core functioning of your organization. The advent of telecom technologies, software enabled everything, new media, and others are just the tips of icebergs that usher in multiple layers of possibilities to your operations.

The recently evolved communications services sector casts some light on this changing structure. In September 2018, MSCI expanded the Telecommunication Services sector of its market-defining Global Industry Classification Standard (GICS) to include companies from the Consumer Discretionary and Information Technology sectors, renaming it Communication Services. This means home entertainment software companies such as Electronic Arts, social media and search companies such as Facebook and Google, and entertainment companies such as Netflix and Disney are all in the same boat now as AT&T and Verizon.

Suddenly the good, the bad, the ugly, the bully, the legends and the hungry neighborhood teenagers are all aiming for the same pie. Further, with software eating into every sector, each player is also hoping to gain muscle for the future by fishing into the same talent pool at the same time.



Millions of websites have been compromised, but the most likely malware isn't cryptomining: it's quietly stealing files and redirecting traffic, a new SiteLock report shows.

Websites suffer an average of 62 serious attack threats per day each, which adds up to roughly 376 million attacks daily, according to a new study of more than 6 million websites worldwide.

"Even though the numbers seems a little small, 62 attacks is still a pretty big number," says Monique Becenti, product and channel marketing specialist at SiteLock, which published the study in a report today.

Those attacks weren't concentrated in ransomware and cryptomining malware, but in such "classic" techniques as backdoors, shells, and JavaScript files. The JavaScript attacks are notable because they tend not to attack the website directly but to hijack visitor traffic, redirecting visitors to alternate, illegitimate destinations.
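The study's per-site and aggregate figures square with quick arithmetic. The site count and daily rate are the article's; the multiplication below is our sanity check, not a figure from the report:

```python
# Rough sanity check of the SiteLock report's figures: ~6 million sites
# at an average of 62 attack threats per day each.
attacks_per_site_per_day = 62
sites = 6_000_000  # "more than 6 million websites" in the study

aggregate = attacks_per_site_per_day * sites
print(aggregate)  # 372000000 -- in line with the ~376 million cited
```

The small gap between 372M and 376M is consistent with the study covering somewhat more than 6 million sites.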



(TNS) — The Federal Emergency Management Agency has approved nearly $1.7 million to reimburse Sarasota County for the costs of debris removal from Hurricane Irma under FEMA's Public Assistance Program.

Funding for this Public Assistance project is authorized under Section 403 of the Robert T. Stafford Act for Florida to cover Hurricane Irma-related expenses, reimbursing eligible applicants for the cost of debris removal; life-saving emergency protective measures; and the repair, replacement or restoration of disaster-damaged facilities like buildings, roads and utilities.

The money is sent to the state of Florida to assist Sarasota County with the costs of federally declared disasters or emergencies. The Florida Division of Emergency Management works with FEMA on the reimbursement. Sarasota County estimated it had more than 250,000 cubic yards of debris.
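As a rough sense of scale, the article's own figures imply a unit cost. This is a back-of-the-envelope estimate derived from the reported numbers, not an official FEMA rate:

```python
# Implied unit cost from the article's figures: ~$1.7M reimbursement
# against an estimated 250,000 cubic yards of debris.
reimbursement_usd = 1_700_000   # "nearly $1.7 million"
debris_cubic_yards = 250_000    # Sarasota County's estimate

cost_per_cubic_yard = reimbursement_usd / debris_cubic_yards
print(round(cost_per_cubic_yard, 2))  # 6.8
```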



Tweet suggests possible screenshot of stolen city documents and credentials in the wake of attack that took down city servers last week.

A mysterious and newly created Twitter account on May 12 posted what purports to be a screenshot of sensitive documents and user credentials from the city of Baltimore, which was hit late last week by a major ransomware attack.

Researchers at Armor who have been investigating the so-called Robbinhood ransomware used in the attack on the city discovered the post. They say it could be from the attacker, a city employee, someone with access to the documents, or even just a hoax. The city is still recovering from the May 7 attack, which has disrupted everything from real estate transactions awaiting deeds and bill payments for residents to services such as email and telecommunications.

Ransomware attacks typically are all about making money: Attackers demand a fee to decrypt victims' files they have accessed and encrypted. Whether the tweet came from the attackers trying to put the squeeze on the city to pay up or threatening to abuse the kidnapped information is unclear.                              



In an article prepared for Business Continuity Awareness Week, Ryan Weeks, chief information security officer at Datto, shares five tips that business managers and IT teams should follow to help ensure that disaster recovery testing efforts are effective.

Having a solid disaster recovery (DR) strategy in place is imperative – but if you don’t test it regularly, you still risk your business being hit hard if ransomware strikes or if there is a system outage. The purpose of IT disaster recovery testing is to pinpoint and fix any flaws in your DR plan well before you find yourself in a real disaster scenario.

To do this, you need to thoroughly scrutinize how well your plan performs, and allow enough time to resolve any issues before they impact the ability to restore operations in case of an emergency. Scheduled and frequent testing is the only way to be certain your organization can be back up and running quickly following an outage.

To help ensure your testing efforts are effective, follow these five key steps:



Just as every organization security team's needs are unique, so are the reasons for the shortage of candidates for open positions. Here are five strategies to help you close the gap.

For the past several years, security operations center (SOC) teams have consistently reported that one of the biggest obstacles they face is the lack of qualified candidates for open positions. With the increasing volume and sophistication of threats facing organizations, this problem has evolved from an inconvenience into a full-blown epidemic.

According to (ISC)2 research, the shortage of cybersecurity professionals is currently close to 3 million globally and is expected to increase in the years to come. Given the increasingly digital-first orientation of the under-30 population, why is the security community experiencing this crisis of candidates, and more importantly, how can we close the gap?



Disaster recovery (DR) is an essential part of any business protocol, but each year we see numerous organizations fall short. By following these eight basic principles, Rod Harrison says that businesses can ensure that they have a complete DR plan that will provide full business continuity...

Make a thorough DR plan and keep it updated

Although it sounds like the obvious starting point, making a DR plan and keeping it updated is critical. While most businesses have some form of a plan in place, many fail to keep it regularly updated – leaving them vulnerable should a disaster strike.

Businesses should also work out how quickly they must recover, as this will influence the overall strategy. Some data sets will likely need to be recovered more quickly than others. Although everyone's ideal is to recover immediately with zero downtime, the reality is that the more quickly you need to recover, the more expensive it will be to implement. Therefore, it's important to balance the cost against other requirements. For some businesses, a few hours or days of downtime will have limited impact, but for others the need to regain operations within a matter of seconds or even microseconds is critical.
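That cost-versus-speed balance can be made concrete with a toy model: pick the recovery option that minimizes annual cost plus the expected cost of the downtime it allows. All tier names, prices, and impact figures below are hypothetical assumptions, not vendor pricing:

```python
# Illustrative recovery-tier selection: weigh each option's annual cost
# against the expected business loss from the downtime it permits.
tiers = [
    {"name": "Tape backup, offsite",   "annual_cost": 5_000,   "recovery_hours": 48},
    {"name": "Warm standby site",      "annual_cost": 60_000,  "recovery_hours": 4},
    {"name": "Active-active failover", "annual_cost": 250_000, "recovery_hours": 0.01},
]

impact_per_hour = 20_000        # assumed business loss per hour of downtime
expected_outages_per_year = 1   # assumed outage frequency

def total_cost(tier):
    downtime_cost = tier["recovery_hours"] * impact_per_hour * expected_outages_per_year
    return tier["annual_cost"] + downtime_cost

best = min(tiers, key=total_cost)
print(best["name"], total_cost(best))
```

With these particular assumptions the cheapest option is neither the slowest nor the fastest tier; change the hourly impact and the answer changes, which is exactly the balancing act described above.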



Just as spreadsheets and personal computers created a job boom in the '70s, so too will artificial intelligence spur security analysts' ability to defend against advanced threats.

Teaching a machine to think like a human is the promise of artificial intelligence (AI). Using that narrow definition, it naturally follows that AI's future could ultimately include the idling of countless millions of workers who are gainfully employed today.

These concerns about job loss are logical and unavoidable, but in my opinion, they are as unfounded as they are provocative. While someday in the distant future AI systems may start to approach the holy grail of emulating the thought process and analytical capabilities of a human, today's capabilities put AI squarely in the category of a beneficial, time-saving tool rather than a human replacement.

Beginning with the Industrial Revolution and continuing into modern times, machinery has replaced workers. Automated looms disrupted the textile industry, and mass production disrupted the automobile industry. Desktop computing and word processing cut short many stenographers' careers, and other tools such as email and voicemail have imperiled letter carriers and administrative assistants.




  • We argue that existing business continuity approaches have eclipsed issues related to business models.
  • We propose that a shift is needed in business continuity approaches from value preservation to value creation.
  • We respond to calls to make business continuity more holistic and strategic.
  • We chart novel areas of collaboration between two important areas of information management – business continuity and business models.


Company business models are vulnerable to various contingencies in the business environment that may unexpectedly render their business logic ineffective. In particular, technological advancements, such as the Internet of things, big data, sharing economy and crowdsourcing, have enabled new forms of business models that can effectively and abruptly make traditional business models obsolete. By disrupting or even diminishing companies’ revenue streams, environmental contingencies may present a significant threat to business continuity (BC). Evaluating the resilience of business models against these contingencies should therefore be a core area of BC. However, existing BC approaches tend to focus on the continuity of the resources and processes through which a particular business model is accomplished in practice but omit the business model itself. We argue that in order for BC approaches to become holistic and strategic, business models need to become a part of the BC considerations, entailing an expansion of the scope of BC from value preservation to value creation. We propose an approach of Strategic Business Continuity Management, which consists of two parts: (1) sustaining the continuity of the company business model (value preservation) and (2) evaluating and modifying the business model (value creation). We illustrate conceptually the value creation part with an example drawn from the sharing economy.



When creating security metrics, it's critical that test methodologies cover multiple scenarios to ensure that devices perform as expected in all environments.

Networks are a complex collection of components defined by many different standards. These standards help solve network problems ranging from security to performance and usability.

An open standard is a publicly available standard that can be consumed in a variety of ways when deploying a secure solution for a network. Readers of open security standards use them to understand how a technology might help solve security problems on the network. Implementers of open standards can create solutions that address documented security issues. Network operators read standards to understand how the different implementations work together to make a complete security solution.

These network solutions often come from different sources, which leads to the creation of a variety of testing procedures and methodologies to ensure that network components support all the security and performance requirements of the network users. Since the majority of standards are also open, it would make sense that the methods for testing are also open. But often this isn't the case, and I think it should be.



We are fortunate in the UK that major incidents such as earthquakes, wildfires, flooding or terrorist attacks are rare. Yet when they do occur, we often find ourselves ill-prepared for the trials they present. In countries which regularly deal with these catastrophes, a disaster recovery plan is a standard part of a business plan. However, this is not always the case for organisations in the UK.

To give an example of what can happen, we can look to the Holborn fire in 2015. It’s the perfect example of how an event out of your control can cause significant disruption to your business. In the case of the Holborn fire, an electrical fault caused damage to a major gas main, resulting in an underground blaze that lasted for 36 hours… in the middle of London. It wasn’t until six whole days later that power in the area was finally restored.

Can you imagine the impact of one day in which you're unable to access any emails, files or client details? Here we're talking about nearly a whole week! Many businesses that suffer a major disaster never fully recover, losing orders, contracts and key employees. Some even go out of business entirely.



The aftermath of a global corporate scandal is a very messy affair. Firstly, there’s the breaking news, then the media frenzy, the plummeting share price, the evaporating confidence, the damage-limitation exercises and finally the grovelling executives. We live in a super-charged, hyper-connected environment, answerable to the 24-hour “churnalism” cycle and social media chatterati. Boeing, Uber, Nissan, Huawei, Airbus or Purdue Pharma, to name but a recent few, have all had to step up like Winston Churchill to their darkest hour. “Crisis management can be like dealing with an explosion,” explains Jo Willaert, president of the Federation of European Risk Management Associations.

Be quick, honest, open and, in such circumstances, compassionate in communications: these are the key principles of crisis management.

And with any explosion, corporate or otherwise, everyone ducks away from the line of fire for fear of getting hit. Damage limitation can trump open communication. Slow and myopic groupthink can stymie a crystal-clear crisis management plan because the stakes can be excruciatingly high and the fallout unthinkable. No one really wants to spark the next Lehman or Enron crisis. It would be career suicide.



Three steps you can take, based on Department of Homeland Security priorities

At the 2019 RSA Conference earlier this year, Chris Krebs, director of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA), outlined several key priorities the agency is focused on for protecting US critical infrastructure. The US government is at the forefront when it comes to cybersecurity trends, so being aware of its focus can help private sector organizations improve cyber situational awareness and reduce risk.

Protecting Networks and Data from Nation-State Actors
CISA watches the usual suspects: Russia, China, Iran, and North Korea. Its key focus here is supply chain risk and minimizing the government's attack surface by keeping what it views to be risky vendors' equipment and applications out of US critical infrastructure networks.

In 2018, the US government banned technology from Russia-based Kaspersky Labs. With a heavy focus on China and 5G, it is now heavily focused on Huawei. Overall, the government is concerned that technology equipment from perceived risky foreign vendors could be used for malicious purposes.

Another area of focus is foreign VPN applications and, specifically, China-based applications from Dolphin, Opera, and Yandex.



With the explosion of social media today, can you actually get a rendering of what is being said about you, your organization, etc. so as to enable you to protect your reputation?  Or, is it just too much and too overwhelming due to the velocity of information that exists today? Geary Sikich explores…

Reputation risk management

Can you apply risk management techniques to reputation management? The simple answer is yes. You can do a risk assessment: identify the risk(s), assess their probability and impact, and prioritize resources against the risks that have been identified. You end up with a version of a 'heat map'. Useful, perhaps.

A framework needs to be established that allows you to 'horizon scan' for the tidbits of information out there that may impact your reputation, either positively or negatively. This is much the same as setting up an intelligence gathering network.
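The heat-map style assessment the author describes reduces to probability-times-impact scoring with banding. A minimal sketch, with invented risks, scores, and band thresholds:

```python
# Toy reputation-risk heat map: score each risk as probability x impact,
# then bucket scores into heat-map bands. All risks, numbers, and band
# cutoffs are illustrative assumptions.
risks = [
    {"name": "Viral negative review",      "probability": 0.6, "impact": 4},
    {"name": "Executive misconduct story", "probability": 0.1, "impact": 9},
    {"name": "Product recall rumor",       "probability": 0.3, "impact": 7},
]

def score(r):
    return r["probability"] * r["impact"]

def band(s):
    # Arbitrary thresholds for red/amber/green banding.
    if s >= 2.0:
        return "red"
    if s >= 1.0:
        return "amber"
    return "green"

for r in sorted(risks, key=score, reverse=True):
    print(f'{r["name"]}: {score(r):.2f} ({band(score(r))})')
```

Note how a low-probability, high-impact risk can land in a lower band than a likelier, milder one; that is the judgment call a heat map forces you to confront.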



New ISSA/ESG survey underscores increasing pressures and security fallout of a strapped security team

Most cybersecurity professionals are struggling with heavier workloads and insufficient time to properly master and deploy all of their security tools' features, as well as hone their own skills, according to a new report.

The third annual Enterprise Strategy Group (ESG) and Information Systems Security Association (ISSA) International report on the state of cybersecurity professionals worldwide says nearly three-quarters of organizations are dealing with the fallout of the industry's skills gap. In the past two years, nearly half of the organizations surveyed suffered at least one damaging security incident in which a critical system was compromised, according to the report.

More than 65% of security pros say their current job demands typically impede their ability to develop and advance their skills, and 47% say they can't fully learn and use some security technologies to their "full potential."

"Cybersecurity professionals don't have the luxury of time to improve their skills and manage their careers," says Jon Oltsik, senior principal analyst and fellow at ESG and author of the report. That's a dangerous trend given the increasing demands of more IT devices, applications, and cloud migration without advancing security with these IT moves, he notes.



When people think of hurricanes, most think of wind. The Saffir-Simpson hurricane scale measures the intensity of a hurricane from Category 1 to 5, and that measurement is based on wind speed.

But a majority of deaths that result from hurricanes occur from flooding and water surge, and this happens during inland storms as well as on the coast. That has prompted some to urge that the public be educated about the dangers of water during a hurricane, not just wind speed.

During the 2017 and 2018 hurricane seasons, 90 percent of the deaths resulting from hurricanes were water related, and 49 percent of those resulted from storm surge. More than half of the deaths from water involved an automobile. People often try to drive through water but hydroplane or, worse, float away. Twenty-four inches of water can float an SUV. Six inches of flowing water can knock a man off his feet.



Adaptive Business Continuity (Adaptive BC) is an alternative approach to business continuity planning, ‘based on the belief that the practices of traditional BC planning have become increasingly ineffectual’. In this article, Jean Rowe challenges the Adaptive BC approach.

We all can appreciate the intent to innovate, but innovation, in the end, must meet the needs of the consumer. With this in mind, the Adaptive BC approach (The Adaptive BC Manifesto, 2016) uses 'innovation' as a key message.

However, I believe that, upon reflection, the Adaptive BC approach can be viewed as the business continuity industry’s version of The Emperor’s New Clothes.

The Emperor’s New Clothes is “a short tale by Hans Christian Andersen, about two weavers who promise an emperor a new suit of clothes that they say is invisible to those who are unfit for their positions, stupid, or incompetent.” As professional practitioners, we need to dispel the myth that using the Adaptive BC approach is, metaphorically speaking, draping the Emperor (i.e. top management) in finely stitched 'innovative' business continuity designer clothes whose beauty only the competent can see.



As in many areas of business continuity and life, myths abound. Crisis management has them as well. In today’s post, we’ll look at five of the most pervasive.

Crisis management (CM) planning is an area where many companies believe (or hope) they are in great shape, even as they harbor doubts about their plans and hope those plans are never put to the test.

These myths have three things in common: 1) Believing in them makes people feel like they are off the hook, 2) they aren’t true, and 3) they are an obstacle to the company’s truly becoming prepared to deal with a crisis.

Here are five of the myths we encounter most frequently when out in the field:



Implementing an environmental management system (EMS) based on ISO 14001 might seem like a big task, but that doesn’t mean it is just for the bigger players in the market. Breaking it down into phases is the key. A newly revised guidance document, just published, helps businesses of all shapes and sizes put an EMS in place in the way that suits them and reap the benefits every step of the way.

The environment is changing rapidly and businesses need to keep on top of what this means for them in order to survive – and thrive. An environmental management system (EMS) based on ISO 14001 helps organizations effectively manage the risks and capitalize on the opportunities that our changing world brings. Implementing an EMS provides a number of benefits such as more efficient use of natural resources and energy, enhanced compliance with legal requirements and better relations with customers.

Improving environmental performance is made easier with formal systems in place. However, small and medium-sized enterprises (SMEs) often find EMS implementation difficult due to fewer staff and resources.



(TNS) - Authorities said Friday morning at least five people have died in the Camp Fire that rapidly engulfed the town of Paradise near Chico, and more fatalities are expected to be discovered as the flames rage out of control and smoke blankets the region in an orange haze.

Two other wildfires meanwhile continued to scorch their way toward the Pacific Ocean in Ventura County, prompting evacuations of Malibu, a Cal State University campus and a naval base.

“The magnitude of the destruction that we are seeing is really, again, unbelievable and heartbreaking, and our, our hearts go out to everybody who has been affected by this and impacted,” Mark Ghilarducci, director of the Governor’s Office of Emergency Services, said in a Friday morning news conference live-streamed on Facebook.

Butte County Sheriff-Coroner Kory L. Honea reported Friday that five people have been found dead in the area of Edgewood Lane in Paradise. Their bodies were found in vehicles that the sheriff said “were overcome by the Camp Fire,” and were too badly burned to identify yet.



In fact, FIN7's activities only appear to have broadened, according to a new report.

The arrests last August of three key members of the prolific FIN7 cyber threat group appear to have done little to stop its malicious activities so far.

In fact, telemetry from multiple recent campaigns suggests that the group's influence may actually have expanded over the last several months, Kaspersky Lab said in a report Wednesday. According to the security vendor, it has observed other groups using FIN7's tactics, techniques, and procedures [TTPs] in different campaigns, which suggests a possible collaboration among them.

"Usually, groups disappear from the radar for a time after arrests or public announcements about their activity," says Yury Namestnikov, a security researcher at Kaspersky Lab. "But this time, we see that they haven't stopped but are broadening their attacks and invested in the development of a toolkit," he says.



(TNS) — People were evacuated from their homes and schools were closed or delayed Wednesday after Kansas was hit with back-to-back thunderstorms.

The Kansas Turnpike was also closed south of Wellington to the Oklahoma border Wednesday morning.

Emergency management officials began evacuating people from an area about 5 miles west of Manhattan about 5 a.m., according to a report by the Associated Press.

Evacuations started in the Wichita area early Wednesday. The Weather Channel reported on its Twitter account that evacuations were ongoing in parts of Peabody and Wellington before 6 a.m. Peabody is in Marion County, about 40 miles north of Wichita. Wellington is in Sumner County, about 35 miles south of Wichita.



Kaplan & Walker’s Jeff Kaplan discusses the Department of Justice’s recent updates to its guidelines for evaluating the effectiveness of corporate compliance programs in the context of an investigation.

Editor’s note: Later this month CCI will publish the second and expanded edition of Jeff Kaplan’s popular e-book Compliance & Ethics Risk Assessment: Concepts, Methods and New Directions. Today’s post is excerpted from that volume.

When the original Federal Sentencing Guidelines for Organizations (“the Sentencing Guidelines”) were issued in 1991, there was no mention in them of risk assessment as part of compliance programs. It was not until the Sentencing Guidelines were amended in 2004 that this striking omission was remedied. But even then, risk assessment had not fully “arrived,” as some of the early compliance program requirements in FCPA settlements failed to include a risk assessment component.

Today, of course, risk assessment is front and center in governmental compliance program expectations. This is evident in the Justice Department’s recently published guidance Evaluation of Corporate Compliance Programs (“the Evaluation”).

This post reviews the Evaluation’s discussion of risk assessment. It also offers some practice pointers for meeting those expectations.



Effective leaders understand that boards are comprised of people with different skills and areas of expertise – often without the acumen to understand the details of security and risk the way a security or risk professional does. Lockpath’s Sam Abadir offers guidance on bridging that gap.

Communicating risk posture and assessments to the highest levels of an organization is a demanding and increasingly pivotal responsibility in businesses that rely on information technology. In a world where new threat vectors and information risks proliferate, every CISO must be skilled in communicating the value of IT security to the business. By presenting this connection to the board, information chiefs show the role risk plays in the business and how information risk plays a role in fulfilling overall corporate objectives.

The risk management and governance work performed by CIOs, CROs, CISOs and their teams is central to the security of enterprise assets, data, supply chains, services and customers. It’s not just about checking boxes on compliance and audit preparation. When governance, risk management and compliance (GRC) programs are properly implemented, they strengthen and protect every facet of the enterprise. Managing security, IT and corporate policies becomes more integrated and efficient, closing gaps created by silos of data, systems and functions.



The same disruptive technologies that are changing our lives and revolutionizing virtually every sector of the economy can be used to create a more sustainable world. By setting the standards that frame these initiatives, ISO/TC 207 helps scale solutions to our most urgent environmental challenges. 

Just a decade ago, the term “green business strategy” evoked visions of fringe environmentalism and a high cost for minimal good. Recently, however, a new common wisdom has emerged that promises the ultimate reconciliation of environmental and economic concerns.

This new vision sounds great, yet is it realistic? ISOfocus sits down with Sheila Leggett, who began her term in 2018 as Chair of ISO technical committee ISO/TC 207, Environmental management, building on a distinguished career as a biologist, ecologist, industry consultant and environmental legislator. Having served on Canada’s Natural Resources Conservation Board and, later, the National Energy Board, Leggett brings broad experience and detailed knowledge.

The idea that a renewed interest in environmental management will result in a more sustainable world has widespread appeal. It is not surprising that ISO/TC 207 standards are so much in demand. Their standards portfolio, after all, tries to spur innovation and create business opportunities – for the good of all. Here, Leggett gives the lowdown on environmental management, and how a strategy good for the world can also be good for your bottom line.



Wednesday, 08 May 2019 15:16

Beyond Technology

(TNS) — Imagine being a plate in a dishwasher.

That's what it's like flying into a hurricane, says Commander Nate Kahn, who pilots a National Oceanic and Atmospheric Administration Lockheed WP-3D Orion "Hurricane Hunter" into tropical storms for a living.

"There's a lot of water, a lot of wind," Kahn, 37, said. "Every flight is different."

James Roles, 58, an electronics engineer with the crew, said, "You're kind of like a peanut in a can. It shakes you pretty good."

Kahn, Roles and other crew members were at Quonset State Airport on Monday to help raise awareness about the danger of tropical cyclones and the need to prepare for them. The Rhode Island visit, which also included a USAF Reserve WC-130J Hurricane Hunter, is part of a five-state NOAA tour during National Hurricane Preparedness Week. Hurricane season starts June 1 and continues until Nov. 30.

Students from several area schools toured the planes, which were then opened to the general public.



Behavioral biometrics is a building block to be used in conjunction with other security measures, but it shows promise.

The quest for frictionless yet secure authentication has been the central driver of innovation in identity and access management (IAM) systems for a long time. But today — as new technologies become available and passwords continue to fall by the wayside — novel forms of authentication are coming faster than ever.

For instance, many industries have grown comfortable using device-based biometrics such as fingerprint, voice, and face recognition, and some major brands — including Bank of America, Cigna, Intuit, and T-Mobile — have even begun to allow "biometric gesture"-based authentication on mobile phones, tablets, and PCs. A unique swipe or similar gesture is used to securely access online services and eliminate the need for passwords.

The global market for biometrics overall is growing nearly 20% annually and is on track to reach more than $10 billion by 2022. Amid this burgeoning market, "behavioral biometrics" has emerged as a new segment. This new area uses various sensors on your phone to create a behavioral signature. Behavioral biometrics on smartphones may prove to be a big driver of biometrics market growth. Against this backdrop, the evolution of behavioral biometrics could have a major impact on the whole IAM industry. 
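As a rough illustration of the idea, a behavioral signature can be thought of as a feature vector compared against an enrolled profile. The sketch below is purely illustrative: the three features (typing cadence, swipe speed, device tilt) and the 0.95 threshold are hypothetical, and real systems use far richer models:

```python
import math

# Sketch: comparing a session's behavioral features to an enrolled profile.
# Features and threshold here are hypothetical placeholders.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

enrolled = [0.42, 1.8, 0.07]  # user's stored behavioral profile
session = [0.40, 1.7, 0.08]   # features measured in the current session

# A high similarity score suggests the same person is holding the phone.
same_user = cosine_similarity(enrolled, session) > 0.95
print(same_user)  # True: the session closely matches the enrolled profile
```

In practice such a score would be one continuous signal among many, re-evaluated throughout a session rather than checked once at login.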



Tuesday, 07 May 2019 15:52

Better Behavior, Better Biometrics?

It’s the present neatly wrapped with a bow, the car in the showroom that glistens with a new coat of wax, the lights and sounds of Las Vegas or a million other examples that intrinsically draw our attention and turn our wants into perceived needs. For organizations, these shiny new toys have been replaced with an increasing array of technological advancements. Even without fully assessing whether they are a fit for their business model and customer base, technological financial expenditures have skyrocketed over the past few years, with no end in sight.

And internal audit is caught squarely in the middle.

As the auditor’s role continues to shift away from one of strict independence, organizational leaders are demanding more and more input from this group. No longer tasked with simply overseeing evaluations and recommending improvement for the effectiveness of risk management, control and governance processes, internal auditors are now being tasked with playing a more active role in guiding executive decision-making – especially regarding technology transformation.



FirstNet, a nationwide communications platform for public safety agencies, has reached a new milestone: It now has tallied more than 600,000 device connections from the more than 7,250 agencies that use it.

These usage numbers are significant, because FirstNet has set out to be the first truly nationwide network that connects public safety agencies. The more agencies that sign up and use FirstNet, the stronger it becomes. This new milestone marks continued growth for a platform that managed to register all 50 states just shy of the deadline it set for doing so in 2017.

FirstNet — a public-private collaboration between the First Responder Network Authority (FirstNet Authority) and AT&T — continues to add more subscribers. In a press release announcing the new milestone, the Authority stated it has recently added the American Medical Response, the Chicago Police Department, the Seattle Fire Department, and the U.S. Coast Guard, among others. The announcement also reported that FirstNet consistently performs more than 25 percent faster than any commercial network, according to an analysis done by Ookla of Speedtest Intelligence that looked at average data download speeds in the first quarter of 2019.



Scammers are figuring out unique ways of abusing cloud services to make their attacks look more genuine, Netskope says.

Cybercriminals have begun abusing legitimate cloud services in new ways to try to sneak attacks past security controls and make their scams appear more convincing to intended victims.

Netskope on Monday said its researchers had observed a recent trend among attackers to send phishing emails and SMS messages with links to malicious sites and content hosted on cloud services such as Amazon Web Services (AWS), Microsoft Azure, Alibaba Cloud, and Google Docs.

The security vendor says it has seen the technique being used to try to direct users to scam pharmacy sites, dating sites, and tech support sites designed to steal personal information or blackmail victims. In other instances, attackers are abusing Google Docs to create and share presentations that contain malicious links.



Tuesday, 07 May 2019 15:45

Attackers Add a New Spin to Old Scams

(TNS) — As Florida enters hurricane season starting June 1, the public needs to prepare for hazardous weather and ensure disaster supply kits are complete, Sarasota County officials urged in a news release. Knowing the risk, getting prepared and staying informed are just a few steps people can take to get ready for hurricane season.

Area hurricane evacuation maps have been updated, officials noted. Residents are encouraged to check the updated maps online to know their evacuation level, previously known as a "zone."

According to Sarasota County Emergency Management officials, just because you can't see water from your home doesn't mean you're not at risk for storm surge. The updated hurricane evacuation levels and storm surge maps are available online by visiting scgov.net/beprepared.



When a critical event happens, preparedness is key.

The ever-growing threat of risks, whether natural disasters or man-made events, has put safety and resiliency top of mind for today’s organizations. Companies of all sizes have implemented mass notification systems to send alerts to employees for situations such as severe weather updates, IT alerts or organizational announcements.

Having a notification system is important; being prepared to use it at a moment’s notice is critical. That’s why when a crisis strikes, it is better to have prewritten message scenarios ready to send rather than fumbling with the message content. However, developing prewritten messages for all kinds of events may seem daunting. Where do you start? Better yet, what do you say?

To better aid notification system admins and users, OnSolve has created the white paper, Your Alert Arsenal for Customizing and Distributing Messages during Critical Events. This critical notification resource contains over 100 prewritten example alerts for emergency and routine events. It covers a range of events in both emergency and non-emergency domains, from natural disaster alerts to customer communication notifications.



A completely trusted stack lets the enterprise be confident that apps and data are treated and protected wherever they are.

With great power comes great responsibility. Just ask Spider-Man — or a 20-something system administrator running a multimillion-dollar IT environment. Enterprise IT infrastructures today are incredibly powerful tools. Highly dynamic and dangerously efficient, they enable what used to take weeks to now be accomplished — or destroyed — with a couple of mouse clicks.

In the hands of an attacker, abuse of this power can dent a company's profits, reputation, brand — even threaten its survival. But even good actors with good intentions can make mistakes, with calamitous results. Bottom line: The combination of great power with human fallibility is a recipe for disaster. So, what's an IT organization to do?

Answer: Trust the stack, not the people.



Monday, 06 May 2019 16:18

Trust the Stack, Not the People

Many business continuity professionals think of the cloud as a magical realm where nothing bad can happen. The reality is that things go wrong in the cloud all the time and as a result we must be sure to perform our due diligence in setting up our cloud-based IT/Disaster Recovery solutions.

In today’s post we’ll look at some of the common misconceptions people have about the cloud.

We’ll also talk about some things you can do to make sure this excellent “new” invention called the cloud doesn’t disappoint you when you need it most.



With hurricane season right around the corner, it’s never too early for businesses to start preparing for potential impact. The first line of defense in protecting your people and assets is understanding how a hurricane’s category level can help your business prepare for the worst.

But first, a quick history lesson:

In the 1970s, Miami engineer Herbert Saffir teamed up with Robert Simpson, the director of the National Hurricane Center. Their mission: develop a simple scale to measure hurricane intensity and the potential damage storms of varying strength could cause to residential and business structures.

The result is the Saffir-Simpson Hurricane Wind Scale, which assigns a category level to storms based on their sustained wind speeds. The scale ranks every hurricane from 1-5, with 5 being the most intense—a storm of this magnitude will leave behind catastrophic damage in its wake.
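As an illustration, the scale can be expressed as a simple lookup. This is a sketch using the National Hurricane Center's published sustained-wind breakpoints in mph:

```python
# Sketch: the Saffir-Simpson Hurricane Wind Scale as a lookup, using the
# National Hurricane Center's sustained-wind breakpoints (mph).
def saffir_simpson_category(wind_mph):
    """Return the hurricane category (1-5), or None below hurricane strength."""
    if wind_mph >= 157:
        return 5
    if wind_mph >= 130:
        return 4
    if wind_mph >= 111:
        return 3
    if wind_mph >= 96:
        return 2
    if wind_mph >= 74:
        return 1
    return None  # tropical storm or weaker

print(saffir_simpson_category(155))  # 4
print(saffir_simpson_category(160))  # 5
```

Note the scale only reflects wind; as discussed earlier in this issue, storm surge and inland flooding cause most hurricane deaths and are not captured by the category number.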



Passwords are simply too vulnerable. On the dark web the underground market for passwords and other identity details is thriving. Every month at least one major hack or data leak takes place in which millions of records, including passwords, are exposed or stolen.

If a hacker gets a password and email address, they simply apply the information to online platforms such as Amazon, eBay, Facebook and others until they get a hit. It’s common practice, known as credential stuffing. According to some estimates, many people will have upwards of 200 online accounts within a few years. How do you remember passwords for so many accounts? The savvy use password managers; however, many still use the same password across all their accounts despite warnings.

Every year BullGuard notes that surveys of the most common passwords reveal that '123456', 'password', '123456789' and 'qwerty' still make the top 10. Cyber criminals love it. They have great success using simple keyboard patterns to break into accounts online because they know so many people are using these easy-to-remember combinations.
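One basic defense is simply refusing these passwords at signup. Here is a minimal sketch using the article's top offenders as an illustrative blocklist; a real deployment would check a much larger breached-password corpus and the function name is hypothetical:

```python
# Sketch: rejecting the easy-to-guess passwords that credential-stuffing
# attacks rely on. The blocklist is illustrative only; real systems screen
# against millions of breached passwords.
COMMON_PASSWORDS = {"123456", "password", "123456789", "qwerty"}

def is_acceptable(password):
    # Require a minimum length and reject known-common choices.
    return len(password) >= 12 and password.lower() not in COMMON_PASSWORDS

print(is_acceptable("qwerty"))                        # False
print(is_acceptable("correct horse battery staple"))  # True
```

A check like this costs nothing at signup time and removes exactly the keyboard patterns criminals try first.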

Because of their inherent vulnerability should we be seeing the slow decline of the password? If so, what will replace it and what will we be using five years from now? This article provides some insight by looking at how today’s developments are evolving from their password roots and how they might shape the future.



In the last few years, biometric technologies from fingerprint to facial recognition are increasingly being leveraged by consumers for a wide range of use cases, ranging from payments to checking luggage at an airport or boarding a plane. While these technologies often simplify the user authentication experience, they also introduce new privacy challenges around the collection and storage of biometric data.

In the US, state regulators have reacted to these growing concerns around biometric data by enacting or proposing legislation. Illinois was the first state to enact such a law in 2008, the Biometric Information Privacy Act (BIPA). BIPA regulates how private organizations can collect, use, and store biometric data. It also enables individuals to sue organizations for damages based on misuse of biometric data.



With GDPR and the California Consumer Privacy Act dominating the data privacy conversation, Baker Tilly’s David Ross discusses the myriad benefits of maintaining compliance.

Recently, France fined Google $57 million for violations of the sweeping General Data Protection Regulation (GDPR) legislation passed by the European Union. Penalized for not properly disclosing or alerting consumers on how their data would be used, Google’s practices ran afoul of the new data privacy laws enacted in May 2018.

Consumers and corporations alike face unfortunate repercussions when cybersecurity precautions aren’t taken seriously. Gloomy statistics and stories of well-known corporations losing customer and vendor personal information to large-scale data breaches fill the news on a near daily basis. The frequency of data breaches has increased to an unprecedented rate, and the cost continues to rise each year. A study by the Ponemon Institute reports the average cost of a data breach is up 6.4 percent since 2017 to a whopping $3.86 million.

While there is significant press surrounding the fines organizations must pay for breaches and violations, the other less apparent and often difficult-to-quantify costs can be much greater, farther reaching and longer lasting. These may include reputational damage, loss of stock value, loss of current and future customers, class action lawsuits and remediation expenses from breaches such as notification costs or credit report monitoring for affected customers.



Exploits give attackers a way to create havoc in business-critical SAP ERP, CRM, SCM, and other environments, Onapsis says.

Exploits targeting a couple of long-known misconfiguration issues in SAP environments have become publicly available, putting close to 1 million systems running the company's software at risk of major compromise.

Risks include attackers being able to view, modify, or permanently delete business-critical data or taking SAP systems offline, according to application security vendor Onapsis.

The exploits, which Onapsis has collectively labeled 10KBLAZE, were publicly released April 23. They affect a wide range of SAP products, including SAP Business Suite, SAP S/4 HANA, SAP ERP, SAP CRM, and SAP Process Integration/Exchange Infrastructure.



And now that I have your attention… there really is a link between the two incongruous topics in the headline. Archive360’s Bill Tolson explains.

Perhaps you remember sitting through a class in high school billed as “sex education,” yet finding it dealt so indirectly with the topic that it was difficult, if not impossible, to discern the pertinent details that would help you understand what you really needed to know in this area. When faced with a real-life situation, many of us thus stumbled in blindly.

If you know anything about the General Data Protection Regulation (GDPR), then you’ll see the close analogy here. While the regulation has been in effect for almost a year now, many companies are still failing to grasp and act on the necessary details to stay compliant — the equivalent of closing their eyes and hoping for the best.



New study shows SMBs face greater security exposure, but large companies still support vulnerable systems as well.

Organizations with high-value external hosts are three times more likely to have severe security exposure to vulnerabilities such as outdated Windows software on their off-premise systems versus their on-premise ones.

While external hosts at SMBs face greater exposure than larger companies, as company revenues grow so do the number of hosts and security issues affecting them, according to a new study published yesterday by the Cyentia Institute and researched by RiskRecon. The study analyzed data from 18,000 organizations and more than 5 million hosts located in more than 200 countries.

The study, Internet Risk Surface Report: Exposure in a Hyper-Connected World, identified more than 32 million security vulnerabilities, such as old Magecart ecommerce software and systems running outdated versions of OpenSSL that are vulnerable to exploits such as DROWN and Shellshock.

Wade Baker, founder of the Cyentia Institute, says the results have to be carefully analyzed. For example, 4.6% of companies with fewer than 10 employees had high or critical exposure to security vulnerabilities, versus 1.8% of companies with more than 100,000 employees. So while the 1.8% number sounds good percentage-wise, that's still many more hosts exposed.
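Baker's point is easy to see with a toy calculation. The company and per-company host counts below are illustrative assumptions, not figures from the report; only the exposure rates come from the study:

```python
# Sketch: a lower exposure *rate* can still mean far more exposed hosts.
# Company counts and hosts-per-company are illustrative assumptions.
segments = {
    "small firms (<10 employees)": {"companies": 10_000, "rate": 0.046, "hosts_each": 5},
    "large firms (>100k employees)": {"companies": 100, "rate": 0.018, "hosts_each": 50_000},
}

for label, seg in segments.items():
    exposed = seg["companies"] * seg["rate"] * seg["hosts_each"]
    print(f"{label}: ~{exposed:,.0f} exposed hosts")
# Despite its lower percentage, the large-firm segment ends up with far
# more exposed hosts in absolute terms.
```

This is why the report's raw percentages have to be read alongside host counts before drawing conclusions about where risk concentrates.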



Thursday, 02 May 2019 14:18

Study Exposes Breadth of Cyber Risk

(TNS) — Unlicensed handgun owners would be allowed to carry their weapons — openly or concealed — in public for up to a week in any area where a local, state or federal disaster is declared, under a bill that has been overwhelmingly approved by the Texas House, 102 to 29.

House Bill 1177 by Rep. Dade Phelan, R-Beaumont, now awaits its first hearing in the Texas Senate. Phelan said he wrote the bill so gun owners don’t have to leave their firearms behind when evacuating their homes. Existing laws allow gun owners to store them in their vehicles, with some conditions.

“I don’t want someone to feel like they have to leave their firearms back in an unsecured home for a week or longer, and we all know how looting occurs in storms,” Phelan said. “Entire neighborhoods are empty and these people can just go shopping, and one of the things they’re looking for is firearms.”

Opponents say Phelan’s bill could make a bad situation worse by adding firearms to an already volatile situation.



More than six months have passed since I wrote Forrester’s predictions 2019 report for distributed ledger technology (DLT, AKA blockchain). In the blockchain world, that’s ages ago.

As I keep being asked how those predictions are shaping up, and having just attended two excellent events in New York, now’s a good time to take stock. So how did we do?

Terminology shift from blockchain to DLT: I was mostly wrong but also a little bit right. What we’re seeing today is neatly reflected in the titles of the two conferences I referred to above: the EY Global Blockchain Summit and IMN’s Synchronize 2019: DLT And Crypto For Financial Institutions. In other words, in the financial services sector, the distributed ledger/DLT terminology has become predominant; there are even firms where the term “blockchain” is banned from the vocabulary altogether. Outside of this industry, though, it’s a different picture: Say “DLT” or “distributed ledger,” and you get blank stares; say “blockchain,” and eyes light up. For the same reason, many startups continue leading with “blockchain” in their marketing, even if their software lacks some of the characteristics typically associated with that descriptor. One said to me: “Blockchain is a recognized category; DLT isn’t.”



According to hurricane research scientists at Colorado State University, early predictions for the 2019 hurricane season show a slightly below average activity level.

While this could be good news, we can’t forget about the destruction caused by hurricanes in the past several years. In 2018, fifteen named storms developed, with Hurricanes Michael and Florence making landfall and causing crises for both Florida and North Carolina. The 2017 season cost more than $282 billion and caused up to 4,770 fatalities. Whether we see two named storms or ten, preparation is your greatest ally against potential devastation. Start by using these automated message templates for your organization’s mass notification system.

Using Hurricane Notification Message Templates

When using message templates, there are a few basic guidelines to follow. Start by keeping the message length to a minimum. This ensures recipients can get the most information in the least amount of time. In addition, SMS messages cannot exceed 918 characters; longer messages are broken up into multiple messages that may create confusion.
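A template library can enforce that ceiling before a crisis hits. Below is a minimal sketch using the 918-character limit cited above; the function and template names are hypothetical:

```python
# Sketch: flag alert templates that exceed the 918-character SMS ceiling
# cited above, so authors find out before a crisis that a message will be
# split into multiple texts.
SMS_LIMIT = 918

def fits_single_sms(body):
    return len(body) <= SMS_LIMIT

templates = {
    "evacuation-notice": "Hurricane warning in effect. Evacuate Zone A now.",
    "all-clear": "The all-clear has been issued. Normal operations may resume.",
}

for name, body in templates.items():
    if not fits_single_sms(body):
        print(f"{name}: {len(body)} chars - will be split into multiple messages")
```

Running a check like this as part of template review keeps every prewritten alert within a single message and avoids the confusion of out-of-order fragments.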

By creating message templates prior to severe weather, you can generate detailed and informative alerts for every step in your emergency plan. Then in the wake of a hurricane, these messages are ready to be sent to the right audiences. Recipients receive only those messages that apply to them, which helps to eliminate confusion during a stressful time.



Business Continuity Awareness Week 2019 is May 13‑17

This global event is a time to consider business continuity and the value an effective continuity management program can have for your organization.

An emergency notification system is a crucial tool in any business continuity plan. Every day, events like the following happen with no warning:

  • Hurricanes, tornadoes, and other natural disasters
  • Active shooter
  • Urban wildfire
  • Power outages
  • Cybercrime
  • Disease outbreaks
  • Workplace violence

One of the most frequent consequences of these events is limited or impaired communication, making it difficult to relay critical messages regarding safety and disaster response. Emergency notification systems have proven to be a vital tool for today’s organizations.



As corporate boards gather for annual shareholder meetings, the issues in the spotlight are defined by forces driving both business growth and risk. BDO’s Amy Rojik offers suggestions for how boards can be prepared to communicate with stakeholders this year.

For corporate boards, spring marks the arrival of annual shareholder meeting season. Every year, shareholders gather for board meetings armed with questions and concerns that, if not sufficiently addressed, may hamper their confidence in a business’s ability to manage risk and sustain long-term value creation.

In 2019, the list of issues on boards’ radars is defined by the forces driving both business growth and risk in equal measure. This year’s key areas of shareholder concern can be grouped into four categories: digital transformation and data protection, people and culture, market movement, and regulation and reporting. Here are some suggestions for how boards can address them.

Digital Transformation & Data Protection

With organizations facing increasing pressure to streamline and optimize every aspect of their business, digital transformation is at the crux of business innovation. As a result, it is nearly impossible to walk into a boardroom without hearing the phrase mentioned. And for good reason — having a digital transformation strategy is no longer optional; it is necessary for survival in today’s digital economy. Corporate boards should expect shareholders to question how much is being spent on digital transformation, who is leading the charge on strategy, what the return on investment is and how the organization compares to its peers. In communicating a digital strategy to stakeholders, linking it to clear key performance indicators (KPIs) and business objectives is critical.



When we walk into our homes, we can ask our voice assistants to turn the lights on, use our faces to unlock doors and monitor our home cameras on our phones. When we travel, the planes we take now include connected blockchain-based parts that regularly alert crews for vital maintenance. Brought on by the Fourth Industrial Revolution (4IR), smart, connected technologies are helping make life easier, faster and more convenient, because blending them significantly boosts the intelligence and reach that any one digital technology could achieve alone. However, blended artificial intelligence (AI), internet of things (IoT), blockchain and other 4IR technologies also open countless new entry points for risk.

Consider, for instance, how many companies are using AI for analytics that improve with use. Data errors, or bias in software or models, can misinform decisions and lead to unforeseen accidents. AI-related risks have ranged from public pushback on the use of AI-based surveillance cameras, to software glitches that led to self-driving car crashes. Add to this list evolving regulations in areas like data privacy, and missed risks can be costly. A 2018 report by the Ponemon Institute estimates noncompliance costs to be 2.7 times the cost of maintaining or meeting compliance requirements — up 45 percent since 2011.

While companies race to digitally transform themselves and realize the full potential of 4IR technologies, we should pause to consider how companies can best navigate the immense risk that these blended technologies bring.



Protiviti’s Jim DeLoach discusses one of the more pervasive issues falling within senior management’s and the board’s purview. Performance relates to virtually everything important: execution of the strategy, the customer experience, investor expectations, executive compensation and even senior management and the board itself. Accurately measuring it is critical.

Performance management is so integral to the functioning of executive management and to the oversight of the board of directors that it’s easy to forget that it, too, is a process. Like all processes, it can be effective or ineffective in delivering the desired value. Given the complexity of the global marketplace, the accelerating pace of disruptive change and ever-increasing stakeholder expectations, how should executive management direct and the board oversee the performance management process so that it is effective in driving execution of the strategy and incenting the desired behaviors across the organization?

As the ultimate champion for effective corporate governance, the board engages management with emphasis on four broad themes: strategy, policy, execution and transparency. Effective performance management touches each of these themes by focusing outwardly as well as inwardly and looking to the future as well as to the present and past. The message is that, in today’s environment, the focus on performance must be anticipatory and proactive as well as reactive and interactive in directing company resources toward the pursuit of the organization’s performance goals.

Many organizations use some variation of a balanced scorecard that integrates financial and non-financial measures to communicate what’s important, focus and align processes and people with strategic objectives and monitor progress in executing the strategy. With that as a context, we are observing in the marketplace six important areas of emphasis for measuring performance:



Financial services firms saw upticks in credential leaks and credit card compromise as cybercriminals go where the money is.

More than one-quarter of all malware attacks target the financial services sector, which has seen dramatic spikes in credential theft, compromised credit cards, and malicious mobile apps as cybercriminals seek new ways to generate illicit profits.

It's hardly surprising to learn attackers want money; what researchers highlight in IntSights' "Banking & Financial Services Cyber Threat Landscape Report" is what they look for and how they obtain it. The first quarter of 2019 saw a 212% year-over-year spike in compromised credit cards, 129% surge in credential leaks, and 102% growth in malicious financial mobile apps.

Banks and other financial services organizations were targeted in 25.7% of all malware attacks last year – more than any of the other 27 industries tracked. Researchers point to two key events that largely shaped the modern financial services threat landscape: the shutdown of cybercriminal forum Altenen and "Collections #1-5," a major global data leak earlier this year.



Cavirin‘s Anupam Sahai discusses the factors that determine whether the CCPA impacts an organization, what the requirements are if so and what action you can take to prepare for it.

Just when you thought you had a handle on GDPR, businesses have new legislation to worry about: the California Consumer Privacy Act (CCPA). The CCPA stipulates that California residents should have greater access to and control over personal information held by businesses. In particular, the law seems targeted at online social media firms (e.g., Facebook) that have been reckless with their users’ personal information over the past few years. With the number of data breaches to date, are we really that surprised that something like this is coming into effect?

CCPA will become effective on January 1, 2020, but will not be enforced until six months afterward. However, the new law enshrines a few fundamental rights for consumers: to access the information that companies hold on them and to control what is collected, stored and shared within the previous 12 months. So, come July 1, 2020, if a company has collected personal information from January 1, 2019 onward, the consumer has the right to find out exactly what data the business has collected, to opt out of the company selling their data and to ask for their data to be deleted – or, as the GDPR regulation puts it, the right to be forgotten.



Strategic Overview

Disasters disrupt preexisting networks of demand and supply. Quickly reestablishing flows of water, food, pharmaceuticals, medical goods, fuel, and other crucial commodities is almost always in the immediate interest of survivors and longer-term recovery.

When there has been catastrophic damage to critical infrastructure, such as the electrical grid and telecommunications systems, there will be an urgent need to resume—and possibly redirect— preexisting flows of life-preserving resources. In the case of densely populated places, when survivors number in the hundreds of thousands, only preexisting sources of supply have enough volume and potential flow to fulfill demand.

During the disasters in Japan (2011) and Hurricane Maria in Puerto Rico (2017), sources of supply remained sufficient to fulfill survivor needs. But the loss of critical infrastructure, the surge in demand, and limited distribution capabilities (e.g., trucks, truckers, loading locations, and more) seriously complicated existing distribution capacity. Emergency managers who develop an understanding of fundamental network behaviors can help avoid unintentionally suppressing supply chain resilience, the ultimate goal being to “do no harm” to surviving capacity.

Delayed and uneven delivery can prompt consumer uncertainty that increases demand and further challenges delivery capabilities. On the worst days, involving large populations of survivors, emergency management can actively facilitate the maximum possible flow of preexisting sources of supply: public water systems; commercial water/beverage bottlers; food, pharmaceutical, and medical goods distributors; fuel providers; and others. To do this effectively requires a level of network understanding and a set of relationships that must be cultivated prior to the extreme event. Ideally, key private and public stakeholders will conceive, test, and refine strategic concepts and operational preparedness through recurring workshops and tabletop exercises. When possible, mitigation measures will be pre-loaded. In this way, private-public and private-private relationships are reinforced through practical problem solving.

Contemporary supply chains share important functional characteristics, but risk and resilience are generally anchored in local-to-regional conditions. What best advances supply chain resilience in Miami will probably share strategic similarities with Seattle, but will be highly differentiated in terms of operations and who is involved.

In recent years the Department of Homeland Security (DHS) and the Federal Emergency Management Agency (FEMA) have engaged with state, local, tribal and territorial partners, private sector, civic sector, and the academic community in a series of innovative interactions to enhance supply chain resilience. This guide reflects the issues explored and the lessons (still being) learned from this process. The guide is designed to help emergency managers at every level think through the challenge and opportunity presented by supply chain resilience. Specific suggestions are made related to research, outreach, and action.



Tuesday, 30 April 2019 14:48

FEMA Supply Chain Resilience Guide

vpnMentor’s research team discovered a hack affecting 80 million American households.

Security researchers Noam Rotem and Ran Locar discovered an unprotected database impacting up to 65% of US households.

Hosted on a Microsoft cloud server, the 24 GB database includes the number of people living in each household along with their full names, their marital status, income bracket, age, and more.



(TNS) - Twenty-three men and women from Cambria, Somerset and Bedford counties graduated on Friday after a week of training by the Laurel Highlands Region Police Crisis Intervention Team.

The program, held at Pennsylvania Highlands Community College in Richland Township, included classes on suicide prevention, mental illness, strategies to de-escalate situations, dealing with juveniles and specialty courts.

“It’s critical that we give them the training,” said Kevin Gaudlip, Richland Township police detective and event coordinator. “Many of these situations are suicidal people that we encounter. In this course, officers are given the skills to effectively communicate with these people to prevent suicide.”

Police officers, 911 dispatchers, corrections officers, EMS personnel, probation officers, crisis intervention teams and others participated.



A firm’s people play essential roles in all stages of IT transformation. For companies at the beginner level of maturity, employees must come together to connect the organization. Once the organization is united, it must adopt customer-centric principles to become adaptable and reach intermediate maturity. To reach an advanced maturity level, the organization must again rely on its people to transition from being adaptable to adaptive. At each of these maturity levels, a company’s talent, culture, and structure look slightly different. The key differences in these three areas between beginner, intermediate, and advanced firms undergoing IT transformations are as follows:



In previous articles, we discussed how communicable diseases and pandemics are (or are not) addressed in personal and commercial insurance policies. Today, we’ll talk about pandemic catastrophe bonds.

The Ebola outbreak between 2014 and 2016 ultimately resulted in more than 28,000 cases and 11,000 deaths, most of them concentrated in the West African countries of Guinea, Liberia, and Sierra Leone.

The outbreak inspired the World Bank to develop a so-called “pandemic catastrophe bond,” an instrument designed to quickly provide financial support in the event of an outbreak. The World Bank reportedly estimated that if the West African countries affected by the Ebola outbreak had had quicker access to financial support, then only 10 percent of the total deaths would have occurred.

But wait, what are “catastrophe bonds” and what’s so special about a pandemic bond?



Monday, 29 April 2019 18:31


With a year of Europe's General Data Protection Regulation under our belt, what have we learned?

There is no denying the impact of the European Union General Data Protection Regulation (GDPR), which went into effect on May 25, 2018. We were all witness — or victim — to the flurry of updated privacy policy emails and cookie consent banners that descended upon us. It was such a zeitgeist moment that "we've updated our privacy policy" became a punchline.

Pragmatically, the GDPR will serve as a catalyst for a new wave of privacy regulations worldwide — as we have already seen with the California Consumer Privacy Act (CCPA) and an approaching wave of state-level regulation from Washington, Hawaii, Massachusetts, New Mexico, Rhode Island, and Maryland.

GDPR has been a boon for technology vendors and legal counsel: A PricewaterhouseCoopers survey indicates that GDPR budgets have topped $10 million for 40% of respondents. A majority of businesses are realizing that there are benefits to remediation beyond compliance, according to a survey by Deloitte. CSOs are happy to use privacy regulations as evidence in support of stronger data protection, CIOs can rethink the way they architect their data, and CMOs can build stronger bonds of trust with their customers.



Security is a top concern at all levels of the organization, but especially at the board level and C-suite. SoftwareONE’s Mike Fitzgerald champions a “security-first” mentality and discusses the implications of failing to meet industry standards and regulations.

Instances of lost intellectual property (IP) due to data breaches are gaining attention in the mainstream press and in board rooms across the globe. C-suite executives are taking note of these events; security and compliance are no longer just IT issues. They are very real and very urgent business issues. Breaches and noncompliance have a major impact on business. After all, in the U.S. alone, the average data breach could cost a company upward of $7.9 million.

Compliance concerns are receiving attention from existing C-suite executives and have caused enough of a stir to lead to the creation of new roles, such as the Chief Compliance Officer (CCO), who is tasked with understanding and managing the plethora of compliance requirements that organizations must address. The CCO and the Chief Information Security Officer (CISO) need to be aware of compliance requirements on the global level (think General Data Protection Regulation (GDPR)) and on the local level (Health Insurance Portability and Accountability Act (HIPAA) and Sarbanes-Oxley (SOX)), since most organizations store at least some of their data in the cloud. The fine for a breach or lapse in compliance with an industry standard or regulation like GDPR can equal as much as 4 percent of a company’s revenue; that is potentially enough to put a company out of business. This new compliance-driven market makes it imperative to have a security-first mentality when it comes to IT decisions and a thorough understanding of the greater business implications resulting from a lack of proper security practices.



More and more businesses are deploying applications, operations, and infrastructure to cloud environments – but many don't take the necessary steps to properly operate and secure them.

"It's not impossible to securely operate in a single-cloud or multicloud environment," says Robert LaMagna-Reiter, CISO at First National Technology Solutions (FNTS). But cloud deployment should be strategized with input from business and security executives. After all, the decision to operate in the cloud is largely driven by business trends and expectations.

One of these drivers is digital transformation. "There is a driving force, regardless of industry, to act faster, respond to customers quicker, improve internal and external user experience, and differentiate yourself from the competition," LaMagna-Reiter says. Flexibility is the biggest factor, he adds, as employees and consumers want access to robust solutions that can be updated quickly.



Monday, 29 April 2019 18:26

How to Build a Cloud Security Model

When Newman, Calif., police officer Ronil Singh was murdered in December 2018, a Blue Alert was issued to notify the public of the danger posed by a killer on the loose and to help apprehend the suspect.

The Blue Alert, a brief message issued via FEMA’s Integrated Public Alert and Warning System (IPAWS), was sent by the California Highway Patrol (CHP) in the Fresno and Merced areas where the suspect was believed to be on the run. The alert’s embedded link to a flyer with additional information on the suspect was clicked by more than a million cellphones within 30 minutes.

Developed by OnSolve, Blue Alert is a new addition to IPAWS that enables law enforcement officials to alert the public when an officer has been injured or killed. It is administered in California by the CHP, which acts on information provided by the local agency seeking to send an alert.



There’s a pervasive myth out there that the marijuana industry is an unregulated Wild West populated by desperadoes and mountebanks out to score a quick buck.

But even a passing familiarity with how the industry operates in states with legal recreational and medical marijuana should be enough to dispel that myth. Marijuana operations are subject to extremely strict licensing requirements and regulatory oversight. Every player in the marijuana supply chain is tightly controlled – from cultivators to retail stores to, yes, the buyers themselves.

In fact, a recent analysis from workers compensation insurer Pinnacol Assurance suggests that the industry’s strict regulatory oversight may also be the reason why it’s a safe industry to work in.



What does the future hold? This year on 28 April, the World Day for Safety and Health at Work draws attention to the future of work and reminds us of the importance of ISO solutions in combating work-related injuries, diseases and fatalities worldwide.

Health and safety at work likely isn’t an issue that’s top of mind on a daily basis. Yet, for millions of workers across the globe, their jobs can put them in some extremely high-risk environments where valuing safety can mean the difference between life and death.

Organized by the International Labour Organization (ILO), the World Day for Safety and Health at Work aims to raise awareness of the importance of occupational health and safety and build a culture of prevention in the workplace. This year’s theme looks to the future for continuing these efforts through major changes such as technology, demographics, sustainable development, and changes in work organization.



In the wake of a reported ransomware attack on global manufacturing firm Aebi Schmidt, Peter Groucutt outlines the steps companies should take to prepare for such incidents. A clear cyber incident response plan and maintaining frequent communication are critical.

The details of the attack on Aebi Schmidt remain light at this stage, but early reports suggest it was severe, with systems for manufacturing operations left inaccessible. The manufacturing sector has recently seen a number of targeted ransomware attacks using a new breed of ransomware known as LockerGoga. Norwegian aluminium producer Norsk Hydro and French engineering firm Altran have been hit in Europe. In the US, chemicals company Hexion was also attacked. The reasoning for these targets is clear – paralysing the IT systems for these businesses has an immediate effect on their production output. That means significant losses, potentially millions of dollars per day. Unlike mass ransomware attacks that might net the attacker a few hundred pounds, these targeted attacks demand correspondingly higher ransoms.

If you are hit by a ransomware attack, you have two options. You can either recover the information from a previous backup or pay the ransom. However, even if you pay the ransom, there is no guarantee you will actually get your data back, so the only way to be fully protected is to have historic backup copies of your data. When recovering from ransomware, your aims are to minimise both data loss and IT downtime. Defensive and preventative strategies are essential but outright prevention of ransomware is impossible. It is therefore vital to plan for how the organization will act when compromised to reduce the impact of attacks. Having an effective cyber incident response plan in place is critical to your recovery.



Friday, 26 April 2019 14:59

Lessons from a Ransomware Attack

Sea level rise, and its perils, is often associated with the East Coast. But California communities along the coast that don’t prepare for what’s ahead could be inviting disasters of a magnitude not yet seen in the state.

A report by the United States Geological Survey Climate Impacts and Coastal Processes Team suggests that future sea level rise, in combination with major storms like the ones the state is experiencing now, could cause more damage than wildfires and earthquakes.

This is the first study to assess the total risk to California’s coastal communities from sea level rise combined with a major storm, rather than from sea level rise alone.



Spam has given way to spear phishing, cryptojacking remains popular, and credential spraying is on the rise.

The time it takes to detect the average cyberattack has shortened, but cyberattackers are now using more subtle techniques to avoid better defenses, a new study of real incident response engagements shows.

Victim organizations detected attacks in 14 days on average last year, down from 26 days in 2017. Yet attackers seem to be adapting to evade the greater vigilance: Spam, while up slightly in 2018, continues to account for a far smaller share of e-mail volume than in any other year in the past decade, and techniques such as hard-to-detect cryptojacking and low-volume credential spraying are becoming more popular, according to Trustwave's newly published Global Security Report.

Other stealth tactics—such as code obfuscation and "living off the land," where attackers use system tools for their malicious aims—are also coming into greater use, showing that attackers are changing their strategies to avoid detection, says Karl Sigler, threat intelligence manager at Trustwave's SpiderLabs. 



(TNS) — Teenagers and adults lined up in the Jerome High School gym, ready to receive medication, while police stood guard outside.

The exercise was part of a four-day simulation, organized by the South Central Public Health District and Jerome County Office of Emergency Management, to prepare for a potential anthrax or other bioterrorism attack. The exercise coincided with similar exercises in Idaho’s six other public health districts.

The South Central Public Health District holds large-scale simulations every few years, district director Melody Bowyer said, and smaller exercises annually.

“One of our very important missions for public health is to protect and prepare the community for a real health threat,” such as a disease outbreak, natural disaster or bioterrorism attack, Bowyer said.



For 74 minutes, traffic destined for Google and Cloudflare services was routed through Russia and into the largest system of censorship in the world, China's Great Firewall.

On November 12, 2018, a small ISP in Nigeria made a mistake while updating its network infrastructure that highlights a critical flaw in the fabric of the Internet. The mistake effectively brought down Google — one of the largest tech companies in the world — for 74 minutes.

To understand what happened, we need to cover the basics of how Internet routing works. When I type, for example, HypotheticalDomain.com into my browser and hit enter, my computer creates a web request and sends it to HypotheticalDomain.com's servers. These servers likely reside in a different state or country than I do. Therefore, my Internet service provider (ISP) must determine how to route my web browser's request to the server across the Internet. To maintain their routing tables, ISPs and Internet backbone companies use a protocol called Border Gateway Protocol (BGP).
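The route-selection rule that makes a mistake like this so damaging is that routers prefer the most specific (longest) matching prefix, so an erroneous but more specific announcement siphons traffic away from the legitimate one. A minimal sketch of that behavior, using a hypothetical routing table (the prefixes and labels here are illustrative, not taken from the actual incident):

```python
# Longest-prefix matching: the rule BGP routers use to pick among routes.
# Hypothetical table: a legitimate /24 and a mistakenly announced,
# more specific /25 covering part of the same address space.
import ipaddress

routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate origin",
    ipaddress.ip_network("203.0.113.0/25"): "mistaken, more specific announcement",
}

def best_route(address: str) -> str:
    ip = ipaddress.ip_address(address)
    matches = [net for net in routes if ip in net]
    # The most specific (longest) matching prefix always wins
    return routes[max(matches, key=lambda n: n.prefixlen)]

# Addresses inside the rogue /25 are diverted; the rest still follow the /24
print(best_route("203.0.113.10"))   # falls inside the rogue /25
print(best_route("203.0.113.200"))  # only the legitimate /24 matches
```

Because the more specific prefix always wins, a single misconfigured announcement, once accepted and propagated by upstream providers, can redirect traffic globally until the route is withdrawn or filtered.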



(TNS) — Rather than let FEMA trailers sit empty at the Bay County Fairgrounds group site and the staging area in Marianna, Panama City, Fla., is asking to be given the opportunity to put people in them.

City Manager Mark McQueen said the city is negotiating with the Federal Emergency Management Agency to try to acquire the surplus trailers. As of last week, there were more than 50 empty trailers at the fairgrounds campsite, according to FEMA reports, in addition to the ones that were staged in Marianna and never rolled out for use.

"Those have gone unclaimed because FEMA has been unable to make contact with those survivors," McQueen said at the recent City Commission meeting. "Knowing that there are some already established in our group sites and that there are another 70 up in Marianna that are not yet placed, we are striving to get those donated to the city."

The hope, according to McQueen, is to get 100 trailers that city officials can offer as interim housing to people who have fallen through the cracks.



Most companies that underinvest in business continuity can give you a reason why they do so, but those reasons are almost always ill-founded. In today’s post, we’ll look at the most common rationales organizations give for skimping on BC—and show you the reality behind those same topics.

In working as a business continuity consultant, I’ve had the opportunity to become familiar with companies that come from across the spectrum in terms of the level of their BC planning. This includes many organizations with stellar programs and also many that do not fully implement their BC plan or have no BC program at all.

The companies that skimp on BC are almost always very articulate in explaining why they think it’s not worthwhile for them to develop a robust BCM program. However, the reasons they give are almost always based on false assumptions and incomplete information.



(TNS) - In Congress, battles are raging over disaster relief spending. Who should get the help? Puerto Rico, still seeking emergency reconstruction money in the wake of 2017 Hurricane Maria (and yes, Puerto Rico is part of the United States and just as deserving of help as, say, North Carolina)? How about Hawaii, where volcanic eruptions have seen molten lava destroy homes, roads and other infrastructure? Nebraska and Iowa, which were inundated by some of the worst flooding in their history? California, trying to rebuild from the most widespread and deadly wildfires the state has ever seen? Or the Florida Panhandle and parts of Georgia, where homes and farms were wiped out by the violent Hurricane Michael last year?

All those disasters and more — they are a signature national wound of the 21st century, a growing roster of attacks by natural forces that are unprecedented in their power and frequency. The object of current congressional fisticuffs is a $13 billion disaster aid package that tries to address many of those violent and devastating acts of nature. And it's not nearly enough to repair what's been broken, let alone do what's needed to prepare for a future that's likely filled with more such fire, wind and water.

Government at every level should have seen it coming, two or three decades ago. That's when we first became aware that climate change had begun, with warmer air and water temperatures and changing weather patterns that were producing more and bigger storms, and droughts where the land was once verdant. As The Washington Post reported this week, taxpayer spending on federal disaster relief funds is almost 10 times greater than it was three decades ago — and that's adjusted for inflation.



Today's application programming interfaces are no longer simple or front-facing, creating new risks for both security and DevOps

All APIs are different inside, even if they're using similar frameworks and architectures, such as REST. Under whatever architectural "roof," the data protocols are always different — even when the structure is the same.

You've likely heard of specific protocol formats, such as REST, JSON, XML, and gRPC. These are actually data formatting and transportation languages that act as APIs' spokes. Inside those formats is a lot of variation. These formatting languages are less "language" and more like airplanes that carry ticketed passengers that move through airports to get where they need to be. The languages passengers speak and their individual cultural details are highly different.

From a security perspective, the protocol itself does nothing. To be effective, security needs to translate the language and intention of each person coming through, not just let the passengers navigate freely.
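To make the airport metaphor concrete, here is a small sketch showing one "passenger" (the same request) arriving on two different "airplanes" (JSON and XML). The field names are invented for illustration; the point is that judging intent requires parsing each format, since the transport alone reveals nothing:

```python
# The same request expressed in two wire formats. Security tooling
# has to understand each format to see what the request actually asks for.
import json
import xml.etree.ElementTree as ET

json_body = '{"user": "alice", "action": "delete_account"}'
xml_body = "<request><user>alice</user><action>delete_account</action></request>"

def extract_action(body: str, fmt: str) -> str:
    # Translate the "passenger's language" per format before judging intent
    if fmt == "json":
        return json.loads(body)["action"]
    if fmt == "xml":
        return ET.fromstring(body).findtext("action")
    raise ValueError(f"unknown format: {fmt}")

# Identical intent, two different transports
print(extract_action(json_body, "json"))  # delete_account
print(extract_action(xml_body, "xml"))    # delete_account
```

A gateway that only checks the "airplane" (valid JSON? well-formed XML?) passes both requests through; only by extracting the action can it decide whether the request should be allowed.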



Thursday, 25 April 2019 14:11

5 Security Challenges to API Protection

(TNS) — With a lot of hard work by the Shoalwater Bay Tribe, a vertical tsunami evacuation tower near Tokeland should be ready for “the big one” by the end of October 2020.

Shoalwater Bay emergency management director Lee Shipman said none of it would have been possible without a core group of driven individuals, particularly previous emergency managers like Dave Nelson and George Crawford.

“We wouldn’t have gotten the (grant) application done without their expertise,” said Shipman. “We are all passionate; we’re kind of like a tsunami evacuation tower gang.”

Nelson and Crawford were instrumental in forming the tribe’s emergency management plans. There are two tsunami warning sirens on the reservation; the one on the north end is named George, after Crawford; the one at the south end — off Blackberry Lane, next to where the evacuation tower will stand — is named Dave, after Nelson.



The Committee on Foreign Investment in the United States (CFIUS) recently forced the Chinese owner of dating app Grindr to divest its ownership interest, citing national security concerns. Fox Rothschild’s Nevena Simidjiyska explains what the decision means for companies who carry personal data going forward.

A new law has expanded the oversight powers of the Committee on Foreign Investment in the United States (CFIUS), and businesses are quickly learning that the interagency committee won’t hesitate to block a deal or force the divestment of a prior acquisition, particularly one involving sensitive customer data or “critical technologies” in industries ranging from semiconductors to social media.

Within the past two years, CFIUS blocked the acquisition of U.S. money transfer company MoneyGram International Inc., as well as a deal in which Chinese investors aimed to acquire mobile marketing firm AppLovin.



Rising to the cyber challenge

Our third Hiscox Cyber Readiness Report provides you with an up-to-the-minute picture of the cyber readiness of organisations, as well as a blueprint for best practice in the fight to counter the ever-evolving cyber threat.

Barely a week goes by without news of a major cyber incident being reported, and the stakes have never been higher. Data theft has become commonplace; the scale of ransom demands has risen steadily; and cumulatively the environment in which businesses must operate is increasingly hostile. The cyber threat has become the unavoidable cost of doing business today.

This is our third Hiscox Cyber Readiness Report and, for the first time, a significant majority of firms surveyed said they experienced one or more cyber attacks in the last 12 months. Both the cost and frequency of attacks have increased markedly compared with a year ago, and where hackers formerly focused mainly on larger companies, small- and medium-sized firms are now equally vulnerable.



Wednesday, 24 April 2019 14:20

The Hiscox Cyber Readiness Report 2019

(TNS) - A warming Earth may add slightly more muscle to heat-hungry hurricanes, but also slash the number that form by 25 percent by the end of the century as drier air dominates the middle levels of the atmosphere.

According to a presentation given this week at the National Hurricane Conference in New Orleans, climate change is expected to intensify storms by about 3 percent, or a few miles per hour, by the year 2100.

Global warming likely added 1 percent to Hurricane Michael's Cat 5 power, or 1 to 2 mph, said Chris Landsea, tropical analysis forecast branch chief at the National Hurricane Center.

"That is a fairly small increase and most of the computer guidance by global warming models say maybe we could see 3 percent stronger by the end of the century," said Landsea, who spoke during a session on hurricane history. "That's really not very much."



Stopping malware the first time is an ideal that has remained tantalizingly out of reach. But automation, artificial intelligence, and deep learning are poised to change that.

The collective efforts of hackers have fundamentally changed the cyber defense game. Today, adversarial automation is being used to create and launch new attacks at such a rate and volume that every strain of malware must now be considered a zero day and every attack considered an advanced persistent threat.

That's not hyperbole. According to research by AV-Test, more than 121.6 million new malware samples were discovered in 2017. That is more than 333,000 new samples each day, more than 230 new samples each minute, nearly four new malware samples every second.
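Those per-day, per-minute, and per-second figures are a straightforward rate conversion from the single annual total; a quick sketch confirms they are consistent:

```python
# Convert AV-Test's 2017 annual malware count into per-day/minute/second rates.
SAMPLES_2017 = 121_600_000

per_day = SAMPLES_2017 / 365        # roughly 333,000 new samples each day
per_minute = per_day / (24 * 60)    # roughly 230 each minute
per_second = per_minute / 60        # nearly four every second

print(f"{per_day:,.0f}/day, {per_minute:,.0f}/min, {per_second:.1f}/sec")
```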



Wednesday, 24 April 2019 14:16

When Every Attack Is a Zero Day

The NYDFS cybersecurity requirements, first enacted in 2017, are now fully in place and helping to address glaring shortcomings in data security. OneSpan’s Michael Magrath provides a quick recap of the fourth and final phase of mandates to help organizations ensure they’re up to speed.

New York’s reputation as the “financial capital of the world” is legendary. The New York State Department of Financial Services (NYDFS) regulates approximately 1,500 financial institutions and banks, as well as over 1,400 insurance companies, and the overwhelming majority of financial institutions conducting business in the U.S. fall under NYDFS regulation – including international organizations operating in New York.

The NYDFS Cybersecurity Requirements for Financial Services Companies (23 NYCRR 500), first enacted in 2017, are now fully in place, and all banks and financial services companies operating in the state must secure their assets and customer accounts against cyberattacks in compliance with its mandates.

The regulation requires financial institutions to implement specific policies and procedures to better protect user data and to implement effective third-party risk management programs with specific requirements – both digital and physical.



Even more are knowingly connecting to unsecure networks and sharing confidential information through collaboration platforms, according to Symphony Communication Services.

An alarming percentage of workers are consciously avoiding IT guidelines for security, according to a new report from Symphony Communication Services.

The report, released this morning, is based on a survey of 1,569 respondents from the US and UK who use collaboration tools at work. It found that 24% of those surveyed are aware of IT security guidelines yet are not following them. Another 27% knowingly connect to an unsecure network. And 25% share confidential information through collaboration platforms, including Skype, Slack, and Microsoft Teams.  

While the numbers may at first appear alarming, there's another way to look at them, says Frank Dickson, a research vice president at IDC who covers security.

"What I see is a large percentage of workers who view security as an impediment," Dickson says. "When security gets in the way of workers getting their jobs done, people will go around security. Companies need to provide better tools so people can be more effective."



(TNS) - After the apocalyptic Camp Fire reduced most of Paradise to ashes last November, a clear pattern emerged.

Fifty-one percent of the 350 houses built after 2008 escaped damage, according to an analysis by McClatchy. Yet only 18 percent of the 12,100 houses built before 2008 did.

What made the difference? Building codes.

The homes with the highest survival rate appear to have benefited from “a landmark 2008 building code designed for California’s fire-prone regions – requiring fire-resistant roofs, siding and other safeguards,” according to a story by The Sacramento Bee’s Dale Kasler and Phillip Reese.

When it comes to defending California’s homes against the threat of wildfires, regulation is protection. The fire-safe building code, known as the 7A code, worked as intended. Homes constructed in compliance with the 2008 standards were built to survive.



Grounded Boeing Angers A Whole Value Chain

Boeing’s having a tough run. The self-proclaimed world’s largest aerospace company is under “intense scrutiny” after two crashes involving its 737 MAX jets, with governments around the world grounding planes, massively affecting travel and airline operations. Boeing finds itself in the center of a terrible storm of angry consumers, buyers, and regulators.

Not The First Time . . . But The Worst Time

This isn’t the first time Boeing planes have crashed — but PR-wise, it’s the worst. What’s different serves as caution for all leaders, regardless of industry. The zeitgeist has changed: No company is immune to the demands of empowered customers, not even B2B companies like Boeing. In Boeing’s case, the empowered customers are not just airlines but also the flying public. B2B companies never really had to worry about public scrutiny with its volatile fury. In an industry’s value chain, they played safely in the background, behind their B2C buyer. In this case, airline manufacturers historically didn’t interact with passengers post-crash but instead worked with regulators. A US presidential tweet hurled the issue into the public realm, a virtual court whose norms disregard protocol.



Don't let social media become the go-to platform for cybercriminals looking to steal sensitive corporate information or cause huge reputational damage.

Social media has become the No. 1 marketing tool for businesses, with 82% of organizations now using social media as a key communication and promotional tactic. It has become the window to a business, enabling companies to build a following, engage with clients and consumers, and share news and updates in a cost-effective way.

While social media can be a great tool, there are also a number of associated security threats. Just by having a presence on the platforms, organizations of all sizes put themselves at risk.



Sometimes problems result when the IT department does its own recovery planning then BC comes along and conducts an analysis that shows IT’s plans to be inadequate. In today’s post, we’ll look at why this gap in recovery strategies is dangerous and what you as a business continuity professional can do to narrow it.


The lack of alignment on key recovery objectives between IT and the business continuity team can lead to catastrophic impacts to customer service, operations, shareholder value, and other areas in the event of a critical disruption.

However, this is an area where the IT department deserves a good amount of sympathy and understanding from the BC team.

The problem starts when the IT team sets about working on its own to develop recovery plans for the organization’s systems and applications. Often they are told to do this by management, and they typically do the work in a silo, with minimal cooperation from other departments.

In devising their recovery plans, the IT department is usually flying blind because they have a limited view of the larger needs of the organization.  



Sixty-four percent of global security decision makers recognize that improving their threat intelligence capabilities is a high or critical priority. Nevertheless, companies across many industries fail to develop a strategy for achieving this. Among the many reasons why organizations struggle to develop a threat intelligence capability, two stand out: Developing a mature threat intelligence program is expensive, and it’s difficult to determine viable protections without a cohesive message of what works effectively. Fortunately, the digital risk protection (DRP) market provides a solution to the threat intelligence problem for both enterprises and small-to-medium businesses (SMBs) alike.

Digital risk protection services substantially improve an organization’s ability to mitigate risk by providing the organization with actionable and relevant intelligence. By simulating an outsider’s perspective of an organization’s digital presence, security professionals working for the organization can better determine which of their assets are most at risk and develop solutions to better protect those assets. Additionally, DRP services can be utilized to protect a company’s reputation by scouring the web for instances of data fraud, breaches, phishing attempts, and more.



Monday, 22 April 2019 16:40

Understanding The Evolving DRP Market

Compliance has yet to adopt a proper management system to substantiate the critical role they play. SEI’s Kevin Byrne discusses how, rather than continuing to raise compliance issues as they occur, CCOs should graduate to consistent, ongoing management-level reporting.

Compliance programs today are at an interesting crossroads. In 2004, the SEC adopted rule 206(4)-7, requiring all registered investment companies and investment advisers to adopt and implement written policies and procedures reasonably designed to prevent violation of the federal securities laws. Firms learned they had to review those policies and procedures annually for their adequacy and the effectiveness of their implementation and to designate a chief compliance officer (CCO) to administer the policies and procedures. Thus, the compliance program as we know it today was born.

Firms hired CCOs and tasked them with creating programs to protect investors and comply with federal securities laws. CCOs built their programs with the tools of the time – principally Microsoft Office – and while there is more experience to draw from, they largely continue to manage their programs the same way today. Policies and procedures are maintained in MS Word. Risk assessments are maintained in Excel. Communications are stored in Outlook. Documentation is maintained on shared drives or in SharePoint.



Recent studies show that before automation can reduce the burden on understaffed cybersecurity teams, they need to bring in enough automation skills to run the tools.

Cybersecurity organizations face a chicken-and-egg conundrum when it comes to automation and the security skills gap. Automated systems stand to reduce many of the burdens weighing on understaffed security teams that struggle to recruit enough skilled workers. But at the same time, security teams find that a lack of automation expertise keeps them from getting the most out of cybersecurity automation. 

A new study out this week from Ponemon Institute on behalf of DomainTools shows that most organizations today are placing bets on security automation. Approximately 79% of respondents either use automation currently or plan to do so in the near-term future.

For many, automation investments are justified to management as a way to beat back the effects of the cybersecurity skills gap that some industry pundits say has created a 3 million person shortfall in the industry. Close to half of the respondents to Ponemon's study report that the inability to properly staff skilled security personnel has increased their organizations' investments in cybersecurity automation. 



Monday, 22 April 2019 16:38

The Cybersecurity Automation Paradox

Some folks see trees when they look up at clouds. For others, clouds may take the form of a rabbit. But when IT professionals stare at clouds, they can’t help but picture a hosted private cloud with micro-segmentation. And for good reason.

What IT professionals see when they look at clouds

An increasing number of organizations are moving to the cloud for its obvious benefits. But along with this transition comes a greater need for more advanced cloud security measures. Micro-segmentation is one of these measures.

Unlike traditional security defense strategies like firewalls and edge devices that protect the flow of north-south data by focusing on the perimeter, micro-segmentation focuses on the inside, isolating individual workloads to protect traffic that’s traveling east-west within a data center. So even if a bad actor manages to get past your perimeter security measures, micro-segmentation will prevent the attack from spreading.
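A minimal sketch of that default-deny, east-west model (the workload names, tiers, and rules below are invented for illustration and are not from any particular product): each workload carries a tier tag, and traffic between tiers is dropped unless a rule explicitly permits it.

```python
# Sketch of a micro-segmentation policy: east-west traffic between workloads
# is denied unless an explicit tier-to-tier rule allows it.
# (Hypothetical tags and rules, for illustration only.)

WORKLOAD_TIER = {"web-01": "web", "app-01": "app", "db-01": "db"}

# Default-deny: only these directed flows are permitted.
ALLOWED_FLOWS = {("web", "app"), ("app", "db")}

def is_allowed(src: str, dst: str) -> bool:
    """Return True only if policy explicitly permits src -> dst traffic."""
    return (WORKLOAD_TIER.get(src), WORKLOAD_TIER.get(dst)) in ALLOWED_FLOWS

# A compromised web server can still reach the app tier, but cannot jump
# straight to the database -- lateral movement is blocked inside the perimeter.
print(is_allowed("web-01", "app-01"))  # True
print(is_allowed("web-01", "db-01"))   # False
```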

Failing to adapt security to meet the growing needs of increasingly complex IT environments can be catastrophic.

With cloud security top of mind for IT professionals, it’s no wonder they’re seeing it everywhere they look.


Recently, the department for Digital, Culture, Media & Sports in the United Kingdom released the Cyber Security Breaches Survey 2019.

The survey discusses statistics for cyberattacks, exposure to cyber risks, the awareness and attitudes of companies around cyber risk, and approaches to cybersecurity. Here are the four takeaways from the survey (all statistics included in this briefing are part of the survey).



Charlie Maclean Bristol discusses whether you should consider likelihood when conducting a risk assessment as part of the business continuity process. Do you need to know how likely it is that a threat will become an actuality; or is knowledge of the impact of the threat enough?

Business continuity has always had a slightly uneasy relationship with risk management. In the 2010 and 2013 BCI Good Practice Guidelines (GPGs) we looked at threat assessments, whereas in the more recent 2018 GPG, we cover a threat and risk assessment. This issue of conducting a threat assessment instead of a risk assessment was driven by a certain character in business continuity circles who was very anti-risk assessment, and hence pushed the idea of threat assessment in the two earlier GPGs.

Nowadays, risk assessment is coming of age and it seems to be everywhere. You need a risk assessment for climbing up a ladder and you also need one for running a massive multinational organization.

This article was inspired by a talk given by Tony Thornton, ARM Manager for ADNOC Refining, which I heard at The BCI UAE Forum in February. During his talk on risk assessment, he focused on there being no point in looking at likelihood when you are doing a business continuity risk assessment. He said that having a 3x3 or even a 5x5 scale was meaningless in terms of likelihood. The point he was making was that if there was a possibility it could happen, then that was good enough: and how likely it was to happen didn’t really matter. He was more enamoured with impact, which he said was worth looking at, as well as differentiating between high, medium and low impacts.
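Thornton's point can be made concrete with a toy scoring comparison (the threats and scores below are invented for illustration): a classic likelihood-times-impact matrix and a pure impact ranking can put entirely different threats at the top of the list.

```python
# Toy comparison of two BC risk-ranking approaches (hypothetical threats and
# 1-5 scores, for illustration only).
threats = {
    "data centre fire":  {"likelihood": 1, "impact": 5},
    "phishing incident": {"likelihood": 5, "impact": 2},
    "key supplier loss": {"likelihood": 2, "impact": 4},
}

# Classic 5x5 matrix: score = likelihood x impact.
by_matrix = sorted(threats, key=lambda t: -(threats[t]["likelihood"] * threats[t]["impact"]))

# Thornton's view: if it can happen at all, rank purely on impact.
by_impact = sorted(threats, key=lambda t: -threats[t]["impact"])

print(by_matrix)  # the frequent, low-impact threat rises to the top
print(by_impact)  # the rare, worst-consequence threat rises to the top
```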



'Sea Turtle' group has compromised at least 40 national security organizations in 13 countries so far, Cisco Talos says

A sophisticated state-sponsored hacking group is intercepting and redirecting Web and email traffic of targeted organizations in over a dozen countries in a brazen DNS hijacking campaign that has heightened fears over vulnerabilities in the Internet's core infrastructure.

Since 2017, the threat group has compromised at least 40 organizations in 13 countries concentrated in the Middle East and North Africa, researchers from Cisco Talos said Wednesday.

In each case, the attackers gained access to, and changed DNS (Domain Name System) records of, the victim organizations so their Internet traffic was routed through attacker-controlled servers. From there, it was inspected and manipulated before being sent to the legitimate destination.  
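One common control against the record tampering described above is to compare the DNS answers an organization observes against a known-good baseline; a minimal sketch (the hostnames and addresses are hypothetical, and real monitoring would also query authoritative servers and registrar records):

```python
# Sketch of a baseline check for DNS record tampering (hostnames and
# addresses are hypothetical, for illustration only).

EXPECTED_A_RECORDS = {
    "mail.example.org": {"203.0.113.10"},
    "vpn.example.org":  {"203.0.113.20"},
}

def check_records(observed: dict) -> list:
    """Flag hosts whose observed A records differ from the baseline."""
    alerts = []
    for host, expected in EXPECTED_A_RECORDS.items():
        seen = set(observed.get(host, set()))
        if seen != expected:
            alerts.append((host, sorted(seen - expected)))
    return alerts

# A hijacked record pointing at an attacker-controlled server is flagged:
print(check_records({"mail.example.org": {"198.51.100.7"},
                     "vpn.example.org": {"203.0.113.20"}}))
```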



Steve Blow explains that while businesses must remain consistently focussed on digital transformation in order to not fall to the back of the pack, digital transformation efforts could be futile if businesses don’t address and improve their IT resilience.

The market as we know it has been changing dramatically over the last decade, with each digital development outpacing the other at every turn in the track. Companies that are too stuck in their ways are being overtaken by contemporary companies, unencumbered by legacy and real estate, which are in line with the latest developments in IT.

This said, almost every business must remain consistently focussed on digital transformation in order to keep up with developments: taking on new digital initiatives to drive efficiencies, create new experiences and, ultimately, beat the competition. According to recent research (1), 90 percent of businesses see data protection as important or critical for digital transformation projects. However, the same research revealed that the technological provisions these businesses need to deliver on data protection assurance are not yet in place.

It has become increasingly clear that having the right foundations early on in any digital journey is a critical factor in the success of transformation initiatives. So, building data protection within a robustly resilient IT infrastructure will be of paramount importance for businesses. Not only will this be critical for businesses to succeed day-to-day, but also to ensure complete transformation, modernization and cohesion. From my experience, there are three recommendations that could be key to help businesses achieve this:



I occasionally find people mapping their SOC capabilities to the ATT&CK framework by checking off specific techniques that they have shown they are able to detect with the intent of measuring coverage within their SOC. In this blog post, I hope to clarify why this strategy may be misleading.

There Are No Bad Actions, Only Bad Behavior

It’s almost impossible to have a high-confidence indictment of a process based on a single behavior. Hypothetically, if there were such a thing as a purely malicious operation, the system would not have been designed with this capability, or it would have been patched out. While there are certainly exceptions (things you would absolutely want to know if they happen in your infrastructure), it’s important to understand ATT&CK techniques as the building blocks of a cyberattack and that they are not malicious in and of themselves.



Executive coach and strategic advisor Amii Barnard-Bahn provides guidance on how executives can prepare for a board appointment: Start by following the 10 steps outlined here.

A lifelong diversity advocate, I testified in multiple legislative committees on the successful passage of California’s SB826, the first law in the U.S. requiring corporate boards to include women. This legislation was designed to create more access for diverse and qualified candidates for public boards. “More access” is important because the role of the board has become critical to the long-term health of a company and the protection of its shareholders and employees. Creating a larger pool of seasoned professionals to guide and govern our corporate institutions is paramount in a time of Tesla, Papa John’s, Theranos and CBS debacles.

A board search can take many years, so it’s never too early to evaluate and cultivate the skills and network you need to establish yourself as a viable candidate.



Wall Street loves a digital business. These technology-driven innovators, which put customer acquisition, retention, and experience at the center, have a different way of looking at the world. They are rewarded with growth and investment.

And it’s not just digital natives. Digitally advanced incumbents, firms such as Accenture, Capital One, Microsoft, and Philips, also see the world through a technology opportunity lens. They are also rewarded.

What do digitally advanced companies look like? How are they different from companies just starting their digital transformation? To find out, we analyzed the digital maturity of 793 enterprises in North America and Europe. We found digitally advanced firms in every industry, from retail and consumer products to manufacturing and financial services.



Archived data great for training and planning

By GLEN DENNY, Baron Services, Inc.

Historical weather conditions can be used for a variety of purposes, including simulation exercises for staff training; proactive emergency weather planning; and proving (or disproving) hazardous conditions for insurance claims. Baron Historical Weather Data, an optional collection of archived weather data for Baron Threat Net, lets users extract and view weather data from up to 8 years of archived radar, hail and tornado detection, and flooding data. Depending upon the user’s needs, the weather data can be configured with access to a window of either 30 days or 365 days of historical access. Other available options for historical data have disadvantages, including difficulty in collecting the data, inability to display data or point query a static image, and issues with using the data to make a meteorological analysis.

Using data for simulation exercises for staff training

Historical weather data is a great tool to use for conducting realistic severe weather simulations during drills and training exercises. For example, using historical lightning information may assist in training school personnel on what conditions look like when it is time to enact their lightning safety plan.

Reenactments of severe weather and lightning events are beneficial for school staff to understand how and when actions should have been taken and what to do the next time a similar weather event happens. It takes time to move people to safety at sporting events and stadiums. Examining historical events helps decision makers formulate better plans for safer execution in live weather events.

Post-event analysis for training and better decision making is key to keeping people safe. A stadium filled with fans for a major sporting event with severe weather and lightning can be extremely deadly. Running a post-event exercise with school staff can be extremely beneficial to building plans that keep everyone safe for future events.

Historical data key to proactive emergency planning

School personnel can use historical data as part of advance proactive planning that would allow personnel to take precautionary measures. For example, if an event in the past year caused an issue, like flooding of an athletic field or facility, officials can look back to that day in the archive at the Baron Threat Net total accumulation product, and then compare that forecast precipitation accumulation from the Baron weather model to see if the upcoming weather is of comparable scale to the event that caused the issue. Similarly, users could look at historical road condition data and compare it to the road conditions forecast.

The data can also be used for making the difficult call to cancel school. The forecast road weather lets officials look at problem areas 24 hours before the weather happens. The historical road weather helps school and transportation officials examine problem areas after the event and make contingency plans based on forecast and actual conditions.

Insurance claims process improved with use of historical data

Should a weather-related accident occur, viewing the historical conditions can be useful in supporting accurate claim validation for insurance and funding purposes. In addition, if an insurance claim needs to be made for damage to school property, school personnel can use the lightning, hail path, damaging wind path, or critical weather indicators to see precisely where and when the damage was likely to have occurred.

Similarly, if a claim is made against a school system due to a person falling on an icy sidewalk on school property, temperature from the Baron current conditions product and road condition data may be of assistance in verifying the claim.

Underneath the hood

The optional Baron Historical Weather Data addition to the standard Baron Threat Net subscription includes a wide variety of data products, including high-resolution radar, standard radar, infrared satellite, damaging wind, road conditions, and hail path, as well as 24-hour rainfall accumulation, current weather, and current threats.

Offering up to 8 years of data, users can select a specific product and review up to 72 hours of data at one time, or review a specific time for a specific date. Information is available for any given area in the U.S., and historical products can be layered, for example, hail swath and radar data. Packages are available in 7-day, 30-day, or 1-year increments.

Other available options for historical weather data are lacking

There are several ways school and campus safety officials can gain access to historical data, but many have disadvantages, including difficulty in collecting the data, inability to display the data, and the inability to point query a static image. Also, officials may not have the knowledge needed to use the data for making a meteorological analysis. In some cases, including road conditions, there is no available archived data source.

For instance, radar data may be obtained from the National Centers for Environmental Information (NCEI), but the process is not straightforward, making it time consuming. Users may have radar data, but lack the knowledge base to be able to interpret it. By contrast, with Baron Threat Net Historical Data, radar imagery can be displayed, with critical weather indicators overlaid, taking the guesswork out of the equation.

There is no straightforward path to obtaining historical weather conditions for specific school districts. The local office of the National Weather Service may be of some help but their sources are limited. By contrast, Baron historical data brings together many sources of weather and lightning data for post-event analysis and validation. Baron Threat Net is the only online tool in the public safety space with a collection of live observations, forecast tools, and historical data access.

Flooding in large swaths of the Midwest has already claimed the lives of at least three people and has caused $3 billion in damages.

A combination of melting snow and rainstorms led to breaches in levees along the Missouri River and other bodies of water.

According to FEMA flood map data, 40 million people in the continental U.S. are at risk for a 100-year flood event; that’s three times more than previously estimated. Additionally, the amount of property in harm’s way is twice the current estimate.

With communities underwater and many more at risk, officials are asking themselves how response plans can be improved.



(TNS) — As the waves of runners left Hopkinton to run the 2019 Boston Marathon, a roomful of public safety officials watched their computers, monitored video screens and radios, and talked to one another as a rolling list of incidents appeared on a screen on a wall.

A runner fell and fractured an arm. A drone was detected. An unattended package was found and cleared.

On marathon day, as 30,000 runners and countless spectators take to the streets, the Massachusetts Emergency Management Agency runs a "unified coordination center" in MEMA's underground bunker in Framingham.

The goal, said MEMA spokesman Christopher Besse, is to bring together local, state and federal public safety officials in one place so they can coordinate their responses to whatever the day brings — from weather to terrorism.



Don Boxley looks at some important questions that need to be asked to ensure that business continuity and data security are considered during digital transformation projects.

Whole industries are transforming with the help of IT and workforce digitization and as competition heats up across virtually every industry, the pressure to digitally transform escalates concurrently. 

Whether you are in IT or are a business professional who is responsible for digitization, business continuity and/or security strategies, you need to be able to think on your feet about your new priorities in a world of ongoing change.

While there are numerous variables that organizations must consider as they move towards digital transformation, perhaps the most essential considerations are business continuity and data security. With more business than ever being conducted in the cloud and more third-party partners needing digital access to that data, failing to keep business continuity and data security at the top of your business’s priority list could instantly become a fatal mistake – after all, they are often inexorably linked. 

In today’s cloud environments, one of the most important data security challenges relates to strategic partner data access and sharing. Your organization’s security safeguards are only as strong as the weakest link in your vendor and partner ecosystem. In other words, you may be inadvertently putting sensitive company data at risk every time you conduct digital business with a vendor that is granted access to your system.



(TNS) — As approximately 1,700 households and businesses remained without electricity in Mower and Freeborn counties Saturday morning, Minnesota Gov. Tim Walz said many in the state are likely unaware of the devastation caused by this week’s storm.

“If you have power at your house, the snow is going to be melted probably by tomorrow or whatever, so it appears like nothing really happened, but this was pretty catastrophic,” he said, noting power outages had wide-ranging impacts from personal medical needs to large-scale farming operations.

Walz was in Austin Saturday morning to meet with Minnesota National Guard members and sheriffs from Mower and Freeborn counties, as well as those tasked with returning power to the businesses and homes throughout the area.



The answer can lead to a scalable enterprise security solution for years to come

In early December 2018, several major corporate breaches were made public. As the news was shared and discussed around my company, one of my colleagues jokingly asked, "I wonder if I can gift some of this free credit monitoring to my future grandchildren." It was a telling comment.

Today, every organization – regardless of industry, size, or level of sophistication – faces one common challenge: security. Breaches grab headlines, and their effects extend well beyond the initial disclosure and clean-up. A breach can do lasting reputational harm to a business, and with the enactment of regulations such as GDPR, can have significant financial consequences.

But as many organizations have learned, there is no silver bullet – no firewall that will stop threats. They are pervasive, they can just as easily come from the inside as they can from outside, and unlike your security team, who must cover every nook and cranny of the attack surface, a malicious actor only has to find one vulnerability to exploit.



(TNS) - A repeat of the most powerful earthquake in San Francisco’s history would knock out phone communications, leave swaths of the city in the dark, cut off water to neighborhoods and kill up to 7,800 people, according to state and federal projections.

If a quake like that were to strike along the San Andreas Fault today, building damage would eclipse $98 billion and tens of thousands of residents would become homeless.

Thursday marks the anniversary of the 1906 quake, a 7.9-magnitude event that turned San Francisco streets into waves, flattening much of the skyline and igniting fires that raged for almost four days. The quake ruptured 296 miles of fault line — from Cape Mendocino to San Juan Bautista.

Since 1906, the fault has remained locked from Point Arena through the Peninsula. The 1989 Loma Prieta earthquake hit 50 miles south of San Francisco, on a remote segment of the San Andreas Fault, and ruptured only 25 miles.



With regulations domestically and abroad changing constantly, the risk of noncompliance is ever present. Fenergo’s Rachel Woolley discusses how this will impact functions beyond compliance.

Regulatory activity has been ramping up recently, and it doesn’t look to be slowing down in 2019. In an era of hyper-regulatory scrutiny, financial institutions find themselves in a constant battle between impending regulatory deadlines and the risk of noncompliance. Add to this the complexity of cross-jurisdictional regulations that vary across different countries even within the same region. The Asia-Pacific region is a prime example; with over 40 regulators in the same region, each with slightly varied rules and requirements, adhering to cross-border regulatory requirements is extremely challenging.

But it’s not just the compliance teams who are affected. As the challenge of regulatory change management grows, divisions and activities beyond the compliance function may be impacted, including data management, operations, client-facing teams, client experience and time-to-revenue. The process needs to be managed and measured methodically if wide-ranging regulatory change is to be handled within available budgets and resources.



In a previous article, we discussed how personal insurance policies address communicable diseases and epidemics. In this article, we’ll look at how commercial insurance policies handle these issues.

Between 1918 and 1919, the so-called Spanish influenza pandemic killed at least 50 million people worldwide and infected about 500 million – roughly one-third of the entire world’s population at the time.

While the Spanish flu’s destructiveness has been an outlier over the last several decades, epidemics and pandemics on a smaller scale do still happen (avian flu, swine flu, Ebola, etc.).

How could disease outbreaks impact commercial property and general liability insurance?



(TNS) - As approximately 1,700 households and businesses remained without electricity in Mower and Freeborn counties Saturday morning, Minnesota Gov. Tim Walz said many in the state are likely unaware of the devastation caused by this week’s storm.

“If you have power at your house, the snow is going to be melted probably by tomorrow or whatever, so it appears like nothing really happened, but this was pretty catastrophic,” he said, noting power outages had wide-ranging impacts from personal medical needs to large-scale farming operations.

Walz was in Austin Saturday morning to meet with Minnesota National Guard members and sheriffs from Mower and Freeborn counties, as well as those tasked with returning power to the businesses and homes throughout the area.



Consider the following: Baseball is the only team sport where the defense has control of the ball. The side currently on offense does not handle the ball as it would in any other sport. A player does not score in baseball by bringing the ball to the finish line or passing it through a goal, but by trying to beat the ball to a goal. This sets it apart from games like basketball, soccer, football, and many others, and adds an interesting complexity. For me, the internal mechanics of baseball are the most interesting part – much like the behind-the-scenes work a business does to set up a Business Continuity Plan.

Situational awareness in the game relies on a player reading signs and signals from other players, both on their own team and on the opposing team. A player might need to decipher the intent of the opposing player on 2nd base, and then relay back to the batter what the next pitch may be. A player might also need to relay signs on what the next pitch is from the middle infielders to the outfielders, so that they know where to position themselves or in what direction to take their first step.

My passion for baseball comes from a love of the strategy involved. The same type of strategy that makes a chess game so intriguing to watch also makes baseball continually exciting. You should know your opponent, their tendencies, strengths, and weaknesses, and then capitalize on that knowledge with the proper timing, all while continually learning from mistakes and honing your strategies for the next opponent.



Friday, 12 April 2019 15:13

Playing Hardball

The last couple of weeks have been an exciting time for the customer data platform (CDP) category. At long last, major marketing technology vendors formally declared their intentions to get serious about managing and activating data for marketing. For the CDP community, the entry of marketing clouds is a big deal, carrying equal parts excitement over the implied market validation and concern (nay, fear?) as competition intensifies.

The concept of CDPs originated about three years ago in response to the very real challenges of collecting and leveraging data for marketing. Since then, a broad range of vendors offering an equally broad variety of solutions have claimed the label and marketed themselves as such. At their core, CDPs promise to unify corporate and customer data and make it accessible to marketers for analytics and campaigns. But Forrester believes that standalone CDPs aren’t equipped to solve this problem for enterprise B2C marketers. For this reason, Forrester welcomes continued progress from CDPs as well as new solutions entering the market. The question about CDPs was never whether there’s a business problem to address but rather who would ultimately solve it.

It was nearly inevitable that large martech vendors would join the fray. Forrester made the call in 2018 that marketing clouds would enter this market and have solutions in place by the end of 2019. In our October 2018 report, we stated that: “Ultimately, CDPs’ greatest competitive threat is the marketing clouds, such as Adobe, Oracle, and Salesforce, that are already ingrained in most enterprise martech stacks and are investing in capabilities far more sophisticated than CDPs’.”



As consumers increasingly rely on cashless spending, the PCI SSC has identified a process to secure cardholder data. Acceptto CEO Shahrokh Shahidzadeh discusses why it’s time to replace password-based credentials.

According to a recent study by the PEW Research Center, consumers in the U.S. are relying less on physical currency. The report found that “roughly three in 10 U.S. adults (29 percent) say they make no purchases using cash during a typical week.” In addition, a generational trend shows that “Americans under the age of 50 are more likely than those ages 50 and older to say they don’t really worry much about having cash on hand.”

As American consumers increasingly rely on cashless spending, it is no wonder that the Payment Card Industry Data Security Standard (PCI DSS) arose to develop a set of requirements applying to companies of any size that accept credit card payments.



The Federation of European Risk Management Associations (FERMA) has expressed concern about the ISO/IEC 27102 ‘Information Security Management Guidelines For Cyber Insurance’ standard, which is currently under development.

FERMA says that the proposed standard is “premature and inappropriate in its current form given the fast pace of technological development” and notes that “no other insurance product is the subject of an ISO standard.”

FERMA member associations – Airmic in the UK, AMRAE in France and BELRIM in Belgium – along with insurance industry representatives have also expressed concerns about the project.

FERMA has urged other member associations to help ensure their national standardization body is aware of the concerns of the whole insurance market.



Donna Boehme, the “Lion of Compliance,” shares that true compliance SME is the first and most foundational element of a strong compliance program. An experienced CCO with true compliance SME, earned in the field and in the profession, understands on many levels the multidisciplinary nature of the work, the optimal way to educate and facilitate collaboration, and what can realistically be achieved through each phase or cycle of a strong, effective compliance program that supports and is driven by a culture of ethical leadership.

In 2016, two researchers from the University of Michigan’s Stephen M. Ross School of Business published a report on their study “Why Don’t General Counsels Stop Corporate Crime?” The simple answer: “Because it’s not their job!”

This is precisely why true compliance subject matter expertise, earned in the field and with the profession successfully designing and managing compliance programs (“Compliance SME”), is the first and foundational element of the modern Compliance 2.0 model. The modern 2.0 model recognizes compliance as an independent profession, distinct from Legal, with the subject matter expertise (SME) needed by senior management to lead and advise its approach to the modern and existential issues of compliance, ethics, culture and reputation.

The modern Compliance 2.0 model takes the place of the failed Compliance 1.0 model, which was based on a naïve and misinformed assumption by boards and CEOs that compliance should be structured as a captive subset of legal and thus driven solely by the legal mandate and mindset. That flawed model failed to accommodate the stark reality that compliance and ethics were emerging as a profession and body of SME completely separate from legal, with very different mandates, core competencies, practices and skill sets. At the same time, advocates for the in-house bar sensed an opportunity to respond to the chaotic legal services market and claim the new role of Chief Compliance Officer for the legal field. Yet, in their zeal to cast the CCO role as nothing more than a “legal lieutenant” and a “process integrator,” these voices drove compliance into a flawed model destined to fail because it lacked true compliance SME and the positioning to drive its distinct independent mandate.



Friday, 12 April 2019 15:05

What is Compliance SME?

Although a crisis communications manual might look like a complex contraption to the untrained eye, what it needs to accomplish condenses to two important things: putting processes in place for communicating with stakeholders during a crisis, and organizing the internal processes that allow the first to happen smoothly.

The manual, to give an example, must ensure both that journalists receive the information they need to report on the crisis, and that the person communicating with them has the right resources in place to provide timely and accurate information.

Over the years, I have audited a great many manuals, and I have found that the same mistakes are made again and again. Here is a look at what most often goes wrong.



Recent events in the news as well as trends in my own work have reminded me of how important it is for business continuity professionals to help protect their organizations against the impact of cyberattacks. In today’s post, I’ll list some ways BC teams can help their companies fend off this rising threat.


The news this week contained stories reporting a serious ransomware attack against the City of Albany, New York. Ransomware attacks are a kind of computer extortion in which hackers encrypt an organization’s data and refuse to provide the decryption key unless a ransom is paid.

One of the most concerning aspects of the story was that hackers reportedly obtained the personal banking data of some city employees and used it to raid those employees’ bank accounts.

This reminded me of how important it is for BC professionals to help their organizations fend off and recover from cyberattacks.



It’s a scenario no business wants to think about: an active shooter or violent offender on the premises. From 2000 to 2017, there were 250 active shooter incidents in the United States. These horrific acts of violence took place across industries and geographic locations. According to the Bureau of Labor Statistics, 2016 alone saw 500 workplace homicides in the U.S.

We now face an unfortunate reality: no company is exempt from the potential threat of an act of violence occurring at their organization. As a result, businesses must be proactive in order to protect their people, minimize injury and loss of life, and safeguard their establishment.

Preparation, effectively communicating with staff, and maintaining protocol are critical measures every business should take when dealing with workplace violence. There’s no such thing as “too safe” when it comes to protecting human life.



An organization’s weakest link is most often human, not technological. Moss Adams’ Francis Tam explains why, when it comes to cybersecurity, anomalies in daily logins, user accounts and infrastructure changes should be an organization’s main concern.

In today’s technology-driven world, information can be a company’s most valuable – yet vulnerable – asset. Data breaches continue to become more frequent and costly in recent years, with many high-profile cases like the Equifax breach in 2017 making headlines. It’s crucial, then, for companies to properly utilize data monitoring and cybersecurity audits to avoid breaches or having information stolen.

Breaches cost companies an average of $3.9 million, and an alarming 54 percent of companies will experience a cyberattack at some point. Full IT assessments can be time-consuming and costly, so companies often skip this crucial process or don’t make it a priority, leaving them vulnerable. Implementing data monitoring for your company’s cybersecurity can help prevent major breaches.



(TNS) — The National Park Service has awarded the territory a little over $10 million to assist in the restoration of hurricane-damaged historic sites.

The supplemental funding was granted to the Virgin Islands State Historic Preservation Office from the Historic Preservation Fund, which will allow for the repair of hurricane-damaged National Register-listed or eligible sites throughout the territory, according to a news release from the V.I. Department of Planning and Natural Resources.

The announcement comes 18 months after hurricanes Irma and Maria tore through the territory, causing serious damage to a number of historic sites and monuments.

All of the Virgin Islands’ historic resources were included on the 31st annual list of “America’s 11 Most Endangered Historic Places,” compiled by the National Trust for Historic Preservation in 2018.



Integrating cloud environments is anything but easy. Evaluating the security risks in doing so must be a starting component of an overall M&A strategy.

Mergers and acquisitions are an essential part of the enterprise business landscape. These deals foster innovation and create some of the biggest and most successful companies in the world.

But one of the largest potential pitfalls in any M&A transaction is mishandling IT integration and creating or failing to mitigate security risk. In the era of cloud computing, the cost of inheriting poor security can be massive and quickly destroy any value the transaction poses.

In addition, a common misconception is that if the two companies merging both operate in the cloud, integration will be easier. The reality is it's actually harder due to the added complexity — no two cloud environments are identical, and the rate of change is so much faster compared with traditional IT. Post-acquisition IT integration used to take five to ten years, but these days, given the nonstop pace of innovation, organizations don't have that luxury.



Thursday, 11 April 2019 15:00

Merging Companies, Merging Clouds

(TNS) - It’s tornado season in Oklahoma; that time every year when my neighbor shuffles the beloved baby portraits of her kids from the mantle to the storm shelter.

For businesses, the seasonal fear, of course, is that they’ll lose their most precious asset: data.

Oklahoma City-based Midcon Recovery Solutions has a precaution for that: two unmarked, double steel-reinforced, windowless concrete buildings in Oklahoma City and one in Broken Arrow in which the company hosts the data of hundreds of organizations — from energy and telecommunications companies to insurance agencies and banks. For $100 a month to several thousand dollars, companies rent anything from a single 1¾-inch rack slot to 200 square feet of space.



In my tech market forecasts, I am starting to see the intersection of two worrisome trends:

  • software subscription fees for multi-tenant SaaS or for single-instance hosted software are rising rapidly, with a growing percentage related to existing software as opposed to new, and with a high percentage carrying fixed annual fees or fees tied to metrics that bear little relationship to company revenues;
  • a small but rising risk of recession, which could reduce company revenues by 5% or more.

The combination of these two factors could place CIOs in a bind where a significant portion of their tech budget is rising inexorably but the potential for their CEOs to ask them to cut tech budgets is also rising.  To see whether CIOs will face this situation, I would recommend that they ask and answer the following questions:
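The arithmetic behind that bind is worth making concrete. Here is a back-of-the-envelope sketch; every figure in it (budget size, subscription share, size of the cut) is hypothetical, chosen purely for illustration:

```python
# Hypothetical illustration: how fixed subscription fees amplify a budget cut.
budget = 100.0         # total tech budget, $M (illustrative)
subscriptions = 30.0   # contractually fixed subscription fees, $M
discretionary = budget - subscriptions  # the only spend a CIO can actually trim

cut = 0.05 * budget    # CEO asks for a 5% cut to the overall tech budget

# The whole cut must come out of discretionary spend:
effective_cut_pct = cut / discretionary * 100
print(f"Overall cut: 5.0% -> effective cut to discretionary spend: {effective_cut_pct:.1f}%")
```

With 30 percent of the budget locked into fixed fees, a 5 percent overall cut becomes a roughly 7 percent cut to everything the CIO can actually control; the higher the fixed share grows, the worse that leverage gets.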



Gartner surveyed over 300 Chief Audit Executives (CAEs) in 2018 on their resource and time investments, priorities and challenges in 2019. Gartner VP Malcolm Murray examines the report’s key findings on the impact to the audit function.

Today’s audit leaders are grappling with a double-edged sword, according to our latest “state of the audit function” research survey at Gartner. Technology-driven change has the potential to drastically improve the efficiency of routine audit tasks, improve the quality and actionability of the insights audits provide to the business and deepen ownership of risk management in the business. At the same time, however, this shift is creating new risks and business models faster than audit and other assurance functions can keep up with.


Harnessing data analytics and robotic process automation technology to support audit’s workflow is critical, but it requires new skills – and those skills require financial investment. Yet budget growth fell to just 2 percent in 2018, down from 5 percent in 2016 and 2017, so audit needs to get smart about how it uses its scarce skills.

With data and analytics experts in very high demand across all functions, industries and geographies, many audit leaders will struggle to transform their function with new capabilities to cope with higher-velocity business processes in the digital age.

The scarcity of critical skills could explain the rise in prevalence of co-sourced resources in audit functions, with its share of total audit budget creeping up from 8 percent in 2017 to 9 percent in 2018, and 67 percent of organizations saying they used co-sourced audit support in 2018, up from 62 percent in 2017. In any case, whether or not the audit function has the skills it needs, it seems clear that technology-related change will continue to disrupt and expand the range of business activities and processes for which audit must provide assurance.



Compliance officers eligible to participate in the SEC and CFTC whistleblower programs must navigate strict rules. Speaking up always carries risk, but – as Michael Filoromo and Zac Arbitman explain – the SEC, CFTC and various federal and state laws protect whistleblowers from retaliation.

The first article in this series provided an overview of the whistleblower award programs established by the Securities Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) and the eligibility criteria for compliance personnel to serve as SEC or CFTC whistleblowers. This article outlines the steps involved in submitting tips and claiming awards, as well as the anti-retaliation protections available to whistleblowers who speak up about wrongdoing.

Procedures for Submitting a Tip

To submit a claim under the SEC and CFTC whistleblower programs, an individual must file a tip, complaint or referral (TCR) form detailing their allegations. When preparing tips for submission to the SEC or CFTC, whistleblowers and their counsel should make sure that the TCR form and accompanying exhibits present the most comprehensive and compelling evidence. With the SEC and CFTC receiving a steadily increasing number of tips – 5,200 in 2018 alone – it is important that a first read of a whistleblower tip provide agency staff with a sound understanding of the alleged violations and, to the extent possible, a roadmap to investigate and prove the wrongdoing.

Whistleblowers should describe in detail the particular practices and transactions they believe to be unlawful, identify the individuals and entities that participated in or directed the misconduct and provide a well-organized presentation of whatever supporting evidence the whistleblower possesses. Under no circumstances, however, should whistleblowers give the SEC or CFTC information that is protected by attorney-client privilege, as the agencies cannot use privileged information in an investigation or enforcement action. The mere receipt of such information can interfere with and significantly delay the staff’s ability to proceed. This is a particularly important consideration for compliance personnel, who often work and communicate with in-house and external counsel.



Mocking new technology isn't productive and can lead to career disadvantage

As security leaders, do we spend as much time trying to understand our businesses as we do trying to understand the threats we face? It seems that we focus intently on emerging threats, but what about emerging technology?

Successful adoption of emerging technology can lead to a competitive advantage. Yet we CISOs have a history of lambasting emerging technologies — cloud, mobile, machine learning, and now blockchain — discounting the value as "pure hype." This practice of mocking new technology isn't productive and can lead to career disadvantage.

Think about this scenario. A web application that is integral to a major new marketing campaign is about to launch and the security team is asked to assess it at the last minute. Sound familiar? As frustrating as this is, this scenario happens on a larger scale as a matter of course when it comes to emerging technology. Why?



In February, 31 state Attorneys General signed a letter endorsing the identity theft rules and acknowledging the need for more secure authentication practices. OneSpan’s Michael Magrath discusses.

It is not every day that 62 percent of the state Attorneys General collaborate and present a unified response to the federal government. On February 11, 2019, 31 AGs signed a letter to Donald Clark, Secretary of the Federal Trade Commission (FTC), in response to the FTC’s December 4 request for comment on the Identity Theft Rules, 16 C.F.R. Part 681, Project No. 188402.

The Identity Theft Rules (“the Rules”), known as the “Red Flags Rule” and the “Card Issuers Rule,” “require financial institutions and some creditors to implement a written identity theft prevention program designed to detect the ‘red flags’ of identity theft in their day-to-day operations, take steps to prevent it and mitigate its damage.” Only these entities have the ability to stop a fraudulent account from being opened at their own place of business or to notify a consumer of a change of address in conjunction with a request for an additional or replacement card, which is a strong indicator that the account may have been taken over by an identity thief.

The AGs note that “the Rules complement the laws of states that have enacted laws requiring entities to develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of personal information.”



(TNS) — Flooding in several Minnesota towns could reach moderate to major stages in the coming week, according to National Weather Service forecasts.

For emergency managers and elected officials in many of those towns, though, it’s business as usual until the worst hits. And that’s if major flooding even materializes.

“It’s way too early to panic,” said Erika Martin, mayor of Oslo, Minn. “We have to watch, and it’s good to be prepared. But we take it day by day.”

Moderate flooding means some structures and roads near the Red River could become covered in water, while major flooding indicates “extensive inundation” of infrastructure, according to the service. The precise level of river flooding varies by location.



In late March, Marsh announced the launch of a program with a number of leading cyberinsurance firms including Allianz, AXA, Beazley, XL, and Zurich to evaluate cybersecurity products and services. Products that meet a minimum standard of criteria receive the designation of “Cyber Catalyst” for their effectiveness in reducing cyber risk. The intent is for insurance premiums to decrease for companies using Cyber Catalyst products and services, though there is no indication of how much premiums will drop. This is not the first time that cyberinsurers have announced partnerships with vendors in an attempt to sell more products and keep premiums down, but it is the most ambitious.



Giving a business continuity presentation to management is challenging at the best of times. If any of the people listening to you start acting out, it can become downright hairy.

Today’s post looks at some of the more common human-factor problems encountered in pitching your proposals to management and suggests solutions for dealing with them.

As a business continuity consultant, I’ve made hundreds of presentations over the years to upper management at organizations of all sizes and in a wide range of industries. Often these meetings are highly charged, especially when I’m advising management of critical exposures at their organization.

I know what it’s like to be a BCM manager presenting to management in order to obtain critical funding or approvals.



The New York Department of Financial Services (NYDFS) requires all regulated entities to adopt the core requirements of a cybersecurity program. Panorays’ Matan Or-El discusses the regulation’s impact on financial institutions.

The cybersecurity landscape is becoming increasingly volatile for financial institutions that are scrambling to fight off a barrage of cyberattacks like bots, credential stuffing, account takeovers and more. Those attacks are taking the form of banking Trojans along with ATM and mobile malware. With open banking on the horizon, financial institutions will increase their risks incrementally with the new services they offer. The protection of personal data, accounts and reputation is at stake.

With the deluge of breaches in the last year, it is a wonder that any personal data is left to protect that hasn’t already been sold on the dark web. These devastating trends have prompted lawmakers in New York State to institute the New York State Department of Financial Services (NYDFS) Cybersecurity Regulation. This new regulation, which went into effect in March, outlines cybersecurity standards for financial institutions including credit unions, health insurers, investment companies, licensed lenders, life insurance companies, mortgage brokers, savings and loan associations, private bankers, offices of foreign banks and commercial banks.

The new regulation requires organizations to review their security risk and develop policies that meet compliance standards relating to data governance, classification, access controls, system monitoring and incident response. Organizations that are regulated are now required to adhere to these guidelines:



Attacks from insiders often go undiscovered for months or years, so the potential impact can be huge. These 11 countermeasures can mitigate the damage.

The fear of cyber breaches looms heavy for many businesses, large and small. However, many companies are so busy looking for bad actors throughout the world that they ignore the threat from within their own walls.

According to Verizon's Insider Threat Report — which analyzes cases involving bad actors from the 2018 Data Breach Investigations Report (DBIR) — 20% of cybersecurity incidents and 15% of the data breaches investigated in the 2018 DBIR originated from people within the organization.

What's scarier, these attacks, which exploit internal data and system access privileges, are often only found months or years after they take place, making their potential impact on a business significant.



Monday, 08 April 2019 16:06

Ignore the Insider Threat at Your Peril

WinMagic’s Garry McCracken discusses the encryption capabilities that are built into Linux, the gaps in protection/compliance risks, and what companies can do to address them.

When it comes to server protection, many enterprises overlook physical security risks. The common myth is that because the servers are in a data center or otherwise behind lock and key, and because the data is in perpetual use, encrypting the drives is unnecessary, as the data is never at rest. 

That myth is particularly troublesome. All drives eventually leave the data center for repair or disposal, and encrypting them is the best way to protect the data from unintentional exposure. And with the enormous number of breaches in the news and compliance regulations such as GDPR, HIPAA and California’s Consumer Privacy Act, the prudent advice is to encrypt everything, everywhere, all the time.

Linux has had encryption built in for several years now. Why, then, are enterprises still struggling with their encryption efforts?

To answer this question, let’s review the disk encryption capabilities that are built into Linux:



It may not be the most interesting aspect of protecting your business but optimizing policy configuration for firewalls and other security devices is an important consideration. Asher Benbenisty examines four common security policy errors, and shows how organizations can avoid them.

As security threats become more and more advanced, managing your network’s defenses correctly has never been more critical. The effectiveness of firewalls and other security devices depends on the security policies that control how they operate. These policies, which can comprise tens or even hundreds of thousands of firewall rules, dictate what traffic is blocked, what is allowed, and where it’s allowed to go in order to enable security, ensure compliance and drive business productivity.

It’s increasingly challenging to maintain these policies so that the needs of the business are optimally balanced with the need to limit risk and be as secure as possible. In most organizations, business applications are being introduced or changed rapidly to support more users or new functionality. Organizations are also moving to virtualized and cloud infrastructures, which introduce new security controls and connectivity flows that must be managed if business applications are to remain secure and compliant at all times. As such, it’s no surprise that Gartner estimates that 99 percent of firewall breaches are the result of simple misconfigurations.

So, what are the most common and harmful misconfigurations that can creep into firewall rulesets and security policies? Let’s take a look at some of the most prevalent, and what can be done to avoid them.





In June 2017 Continuity Central published the results of a survey which looked at whether attitudes to the business impact analysis and risk assessment were changing. Two years on, we are repeating the survey to determine whether there has been any development in thinking across the business continuity profession.

The original survey was carried out in response to calls by Adaptive BC for the removal of the business impact analysis and risk assessment from the business continuity process.

Please take part in the survey at https://www.surveymonkey.co.uk/r/BIAandRA


Read the results of the original survey.


You’ve just invested in an emergency notification system. You’re eager to get the software up and running to keep your people safe, informed, and connected. But you hit a brick wall: you’re told the training will take two weeks, support is already unresponsive and costs extra, and integrating employee data? A complete debacle.

In the world of emergency communication software, a provider’s customer success capability has powerful implications. Quick setup is essential when you’ve got people and assets to safeguard.

Some Common Onboarding Pain Points

Unfortunately, organizations often face onboarding hurdles when they purchase mass notification software. In part, this can be attributed to outdated software and a cumbersome user experience that doesn’t facilitate an easy setup process. Oftentimes, however, a lack of dedicated support and a customer success focus adds undue burden to new buyers. Here are some common pain points organizations face when interacting with a poor support model:



Successful, secure organizations must take an aggressive, pre-emptive posture if they want true data security

Cybercriminals are constantly honing their craft. Their knowledge and ability to bypass security systems are always advancing. As they gain knowledge, they develop and implement sophisticated impersonation methods that are increasingly adept at evading detection and gaining access to secure data. Meanwhile, many of their targets fail to adequately upgrade their security solutions to detect and protect against these methods. Cybercriminals currently have many soft targets, and they know how to penetrate their systems. This climate, which works in the attacker’s favor, underscores why organizations, as potential targets, need to rethink their approach to data and system security.

One of the most common approaches a cybercriminal takes is to present as an employee or friend of the organization under attack. This is the path of least resistance for introducing malicious code to a system disguised as a trusted application. In this way, and without the proper, updated security protocol in place, hackers fly under the radar to access sensitive information and even extract money. The cost can be steep for an enterprise that is breached in this way. A loss of assets can be crippling, as can the perceived loss of reputation. As these attacks become more common, organizations must prepare and have a modern, flexible security strategy in place that incorporates several layers of security.



Yesterday’s post about insurance-related Guinness World Records got me thinking: what other weird insurance policies are out there?

If you know much about insurance, you know that the first place to inquire about weird insurance policies is Lloyd’s of London, legendary clearinghouse for the strange and unusual. (And innovative: they were the underwriters for the world’s first auto policy, the first aviation policy, and soon the first space tourism policy.)

Naturally, Lloyd’s has an entire webpage dedicated to what it (in what I imagine to be staid, Oxford-accented English) calls “innovation and unusual risks.” Some top hits include insurance coverage for David Beckham’s legs (£100 million), Keith Richards’ hands ($1.6 million), and cricketer Merv Hughes’ trademark mustache (£200,000).

My personal favorite is insurance for members of a Derbyshire Whisker Club who wanted coverage for their beards against “fire and theft.” Theft?



Thursday, 04 April 2019 16:12


(TNS) — FEMA has informed Ascension Parish government and area congressional officials that new flood insurance rate maps can proceed without controversial development restrictions along area waterways, a parish council member says.

Parish Councilman Bill Dawson wrote to the mayor of Sorrento on Friday that Federal Emergency Management Agency officials told him and others during a recent meeting that the restrictions, known as floodways, could be removed from the new maps expected to take effect May 15.

"They also told us since the request had come from the Town of Sorrento, The City of Gonzales and the Parish of Ascension, all would have to request the removal of the Floodways request," Dawson wrote in an email shared with Town Council members Tuesday evening.

Dawson has been a primary proponent of the map changes as a way to lower flood insurance rates for residents south of Gonzales and in Sorrento and the Burnside area.



Oracle customers should renegotiate their commercial relationships with this important vendor, using adoption of Oracle’s SaaS products as an incentive and cancellation of maintenance as a credible threat. Oracle’s SaaS strategy seeks to pull customers forward, but it has also undermined the value of its maintenance offering for its legacy on-premises products. Both factors increase customers’ negotiation leverage.

Oracle is determined to increase adoption of its SaaS products. In its last earnings call, Larry Ellison stated that ERP Cloud is one of two strategic imperatives for Oracle (Oracle’s autonomous database is the other). ERP Cloud comprises various finance, procurement, and governance products and is one of five pillars in its SaaS portfolio (the others are CX, HCM, SCM, and manufacturing). There are several good reasons why you should take a serious look at Oracle’s SaaS portfolio:



Software developers and their managers must change their perception of secure coding from being an optional feature to being a requirement that is factored into design from the beginning

Fifth in a continuing series about the human element in cybersecurity.

Programmers are responsible for developing and releasing new systems and applications, and subsequently announcing vulnerabilities and developing updates and patches as vulnerabilities and bugs are discovered. It can take organizations months to apply patches, which creates a window of opportunity for hackers. What steps can programmers take to minimize security flaws, reduce impediments to the patching process, and shrink this window?

Programmers — sometimes called software engineers, software developers, or coders — are the individuals who write code to build operating systems, applications, and software. They are also responsible for debugging programs and releasing patches to address code vulnerabilities after initial release. In this column, we consider programmers at commercial manufacturers and application/software providers, such as Microsoft or Adobe, and programmers responsible for custom internal applications.



Thursday, 04 April 2019 16:02

In Security, Programmers Aren't Perfect

By TREVOR BIDLE, information security and compliance officer, US Signal

World Backup Day purposely falls the day before April Fool’s Day. The founders of the initiative, which takes place March 31, want to impress upon the public that the loss of data resulting from a failure to back up is no joke.

It’s surprising to find that nearly 30 percent of us have never backed up our data. Even more shocking are studies stating that only four in ten companies have a fully documented disaster recovery (DR) plan in place. Of those companies that have a plan, only 40 percent test it at least once a year.

Data has become an integral component of our personal and professional lives, from mission-critical business information to personal photos and videos. DR plans don’t have to be overly complicated. They just need to exist and be regularly tested to ensure they work as planned.

Ahead of World Backup Day, here are some of the key components to consider in a DR plan.

The Basics of Backup

A backup creates data copies at regular intervals that are saved to a hard drive, tape, disk or virtual tape library and stored offsite. If you lose your original data, you can retrieve copies of it. This is particularly useful if your data becomes corrupted at some point. You simply “roll back” to a copy of the data from before it was corrupted.

Other than storage media costs, backup is relatively inexpensive. It may take time for your IT staff to retrieve and recover the data, however, so backup is usually reserved for data you can do without for 24 hours or more.  It doesn’t do much for ensuring continued operations.

Application performance can also be affected each time a backup is done. However, backup is a cost-effective means of meeting certain compliance requirements and for granular recovery, such as recovering a single user’s emails from three years ago. It serves as a “safety net” for your data and has a distinct place in your DR plan.
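The “roll back” idea above can be sketched in a few lines. This is a hypothetical illustration of timestamped backup copies and restores, not any vendor’s product; the function names are invented for the example.

```python
import shutil
import time
from pathlib import Path

def take_backup(source: Path, backup_dir: Path) -> Path:
    """Copy the source file into the backup directory under a timestamped name."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / f"{source.name}.{int(time.time())}.bak"
    shutil.copy2(source, dest)  # copy2 preserves file metadata
    return dest

def roll_back(source: Path, backup_dir: Path) -> None:
    """Restore the most recent backup copy over the (possibly corrupted) original."""
    copies = sorted(backup_dir.glob(f"{source.name}.*.bak"))
    if not copies:
        raise FileNotFoundError("no backup copies available")
    shutil.copy2(copies[-1], source)
```

Because copies are only taken at intervals, anything written after the last `take_backup` call is lost on restore, which is why the article reserves backup for data you can do without for a day or more.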

You can opt for a third-party vendor to handle your backups. For maximum efficiency and security, companies that offer cloud-based backups may be preferable. Some allow you to back up data from any physical or virtual infrastructure, or Windows workstation, to their cloud service. You can then access your data any time, from anywhere. Some also offer backups as a managed service, handling everything from remediation of backup failures to system/file restores to source.

Stay Up-To-Date with Data Replication

Like backup, data replication copies and moves data to another location. The difference is that replication copies data in real- or near-real time, so you have a more up-to-date copy.

Replication is usually performed outside your operating system, in the cloud. Because a copy of all your mission-critical data is there, you can “failover” and migrate production seamlessly. There’s no need to wait for backup tapes to be pulled.

Replication costs more than backup, so it’s often reserved for mission-critical applications that must be up and running for operations to continue during any business interruption. That makes it a key component of a DR plan.

Keep in mind that replication copies every change, even if the change resulted from an error or a virus. To access data before a change, the replication process must be combined with continuous data protection or another type of technology to create recovery points to roll back to if required. That’s one of the benefits of a Disaster Recovery as a Service (DRaaS) solution.
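The interplay of replication and recovery points can be sketched as follows. This is a toy illustration of the journaling idea behind continuous data protection, with invented names, not a real DRaaS implementation.

```python
import time

class ReplicaWithJournal:
    """Apply every write to the replica immediately (replication), but also
    journal the prior state so a bad change can be undone (recovery points)."""

    def __init__(self):
        self.replica = {}   # the always-current copy
        self.journal = []   # (timestamp, key, previous_value)

    def write(self, key, value):
        # Record what the value was before applying the change.
        self.journal.append((time.time(), key, self.replica.get(key)))
        self.replica[key] = value  # replication: change is applied at once

    def roll_back_to(self, timestamp):
        """Undo every change made after the given recovery point."""
        while self.journal and self.journal[-1][0] > timestamp:
            _, key, previous = self.journal.pop()
            if previous is None:  # key did not exist at the recovery point
                self.replica.pop(key, None)
            else:
                self.replica[key] = previous
```

Plain replication is only the `write` path: a corrupting change propagates instantly. The journal is what lets you step back to a moment before the error, which is the capability the article says must be layered on top.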

Planning for Disasters

DRaaS solutions offer benefits that make them an attractive option for integrating into a DR plan. By employing true continuous data protection, a DRaaS solution can offer a recovery point objective (RPO) of a few seconds. Applications can be recovered instantly and automatically — in some cases with a service level agreement (SLA)-backed recovery time objective (RTO) of minutes.

DRaaS solutions also use scalable infrastructure, allowing virtual access of assets with little or no hardware and software expenditures. This saves on software licenses and hardware. Because DRaaS solutions are managed by third parties, your internal IT resources are freed up for other initiatives. DRaaS platforms vary, so research your options to find the one that best meets your needs.

A DR plan is basically a data protection strategy, one that contains numerous components to help ensure the data your business needs is there when it is needed — even if a manmade or natural disaster strikes.

Trevor Bidle has been information security and compliance officer for US Signal, the leading end-to-end solutions provider, since October 2015. Previously, Bidle was the vice president of engineering at US Signal. Bidle is a certified information systems auditor and is completing his master’s in Cybersecurity Policy and Compliance at The George Washington University.

ERAU students generate forecasts with eye-catching daily graphics

Embry-Riddle Aeronautical University (ERAU) decided to amp up its broadcast meteorology classes with professional weather graphics and precision storm tracking tools that can be used to illustrate complex weather conditions and explain weather concepts to students. The customizable graphics platform enables the university to incorporate a range of other available weather data and create graphics that work well in the classroom environment. Providing weather graphics every day, including holidays, helps the university tell the most important national and regional weather story of the day. By expanding the tools student forecasters have on hand, the weather platform provides exceptional analysis and learning opportunities.

First used for broadcast meteorology classes, the new graphic system is now being used for weather analysis and forecasting, aviation weather, and tropical meteorology classes. ERAU continues to expand its use to create more content for the website and as a teaching tool for student pilots and a variety of other situations. And students are sitting up and taking notice. Enrollment in broadcast meteorology classes has more than doubled since they began using the new tools.

Explanations work better with good graphics

Robert Eicher, Assistant Professor of Meteorology, was searching around for a high quality instructional weather analysis and graphics system for his broadcast meteorology class. Before coming to ERAU, Eicher had worked as a television weather broadcaster for two decades. He knew the power of good graphics in explaining weather to audiences and was looking to extend that to his students.

“Lectures are usually accompanied by PowerPoint presentations with a lot of words,” Eicher explains. “As they say, a picture is worth a thousand words – it is easier to explain what’s going on if you have a good graphic. And animated graphics go a lot farther for illustrating what we are teaching about weather.”

Professor Eicher began shopping around for a weather analysis system that would fit into an instructional environment. After looking at available options, he eventually opted for Baron Lynx™, which combines weather graphics, weather analysis and storm tracking into a single platform. He had familiarity with Baron weather products, having used them at television stations in Orlando, Florida and Charlotte, North Carolina.

The Lynx platform includes several components. One area is dedicated to weather analysis, where students analyze weather data across the continental United States. Another area enables students to assemble and prepare the weather show and deliver it during a weather cast. The third is a creative component dedicated to weather graphics, which allows students to generate new weather graphics using existing graphical elements or by creating entirely new artwork.

Lynx was developed with the direct input of more than 70 broadcast professionals, including meteorologists and news directors. When introduced in 2016, Lynx garnered rave reviews for telling captivating weather stories and dominating station-defining moments. TV stations liked that Lynx offered them a scalable architecture that they could configure specifically to their own needs. With that came an arsenal of tools, including wall interaction, instant social media posting, forecast editing, daily graphics, and of course storm analysis. Integration across all platforms – on-air, online, and mobile – was another big plus for weather news professionals.

For Professor Eicher, the two deciding factors in favor of selecting Lynx were value for the money and customizability. “Compared to other options I looked at, you get a lot more for your money – a bigger bang for the buck. I also liked the customizability, which works well for our unique situation. As a university, we are already getting a ton of data from an existing National Oceanic and Atmospheric Administration (NOAA) data port. I like that Lynx allows us to incorporate the data we are getting and make good graphics with it. We can get in and tinker around and do some innovative things for the classroom environment.”

One unique example involved teaching aviation school students about the potential for icing. Eicher went into Lynx and adjusted contours at an atmospheric air pressure of 700 millibars (at 10,000 feet) to show only the 32 degree line, so the students could see where the freezing level was at 10,000 feet. He then adjusted the contours of relative humidity that were 75 percent and above. The result illustrated where the temperature and humidity combined to produce ice, showing the icing potential at that flying level. “It is a unique graphic that I don’t think anyone else has,” noted Eicher.
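The thresholding behind that graphic can be expressed in a few lines. This sketch uses the 32-degree and 75-percent figures from the example above; the grid values are hypothetical, and the function name is invented for illustration.

```python
def icing_potential(temp_f, rel_humidity):
    """Flag a grid point where the 700 mb temperature is at or below freezing
    and relative humidity is 75 percent or higher, the combination contoured
    in the classroom example to show icing potential near 10,000 feet."""
    return temp_f <= 32 and rel_humidity >= 75

# Hypothetical 700 mb grid points: (temperature in degrees F, relative humidity %)
points = [(30, 80), (30, 60), (40, 90), (28, 75)]
flagged = [p for p in points if icing_potential(*p)]
# flagged → [(30, 80), (28, 75)]
```

Only the points that are both cold enough and moist enough are flagged, which is exactly what overlaying the two contour sets showed the students visually.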

The program is being used for weather analysis and forecasting and also enables broadcast meteorology students to publish their forecasts and make them visible to people outside the classroom. “In the past, students would have written their forecasts and only their professor would see it,” said Professor Eicher. “Now the class has a clear purpose. Student meteorologists use Lynx to prepare weather analyses and forecasts and publish the results to the ERAU website using the Baron Digital Content Manager (DCM) portal.”

While not a part of Lynx, the DCM is a web portal that communicates with Lynx. Using the DCM, meteorologists can update forecasts remotely and publish them across mobile platforms and websites. It is accessible to anyone who has credentials: students can log in from their home, lab, or class and enter the data. The DCM forecast builder feature allows users to populate their forecast, select weather graphics associated with specific forecast conditions using a spreadsheet-like form for the data entry, and publish them to the ERAU website. The forecast graphics and the resulting format are predefined during system setup.

On weekends, holiday breaks, or summer vacation, the DCM can be set to revert to the National Weather Service (NWS) forecast, solving the problem of what to do if students are not there to issue a forecast. Eicher considers this a feature that would be extremely useful for any university, because it means a current forecast will always appear on the website. According to Professor Eicher, “The ability to update the forecast via our web portal provided a solution for a need that had been unmet for five years or more.”


Teaching Assistant Michelle Hughes uses Lynx to prepare weather analyses and forecasts and publish to the ERAU website.

In general, Eicher has found a lack of good real time weather instructional material, so he has turned to the Lynx program to develop better teaching tools. In addition to the original broadcast meteorology course, he and other instructors are also using the program for aviation weather and tropical meteorology classes. He anticipates it will soon be used to develop instructional graphics for an introduction to meteorology course. For example, Lynx will allow instructors to move beyond just a still image of information on upper level winds that show current wind patterns and then animate the winds with moving arrows. This type of animation clearly illustrates conditions and highlights areas where attention should be focused.


ERAU is also using the program to develop other high quality instructional materials, including animated graphics that can be used to explain important regional and national weather events, for example, the recent California wildfires.

Positive feedback for new teaching tool

ERAU faculty and administration are extremely pleased with the availability of the new teaching tool for broadcast meteorology students and student pilots. Located in a broadcast studio that is part of the meteorology computer lab, Baron Lynx is accessible to the entire meteorology faculty and students, with output connected to adjacent classrooms. Enrollment in broadcast meteorology classes has more than doubled since ERAU obtained these new tools.

Support and training on the product have been provided at a high level. The Baron technical support staff is used to supporting television stations 24/7/365, so they were not thrown off by students calling on a Saturday afternoon with questions about how to produce graphics for their forecasts. The students showed off their new knowledge in a live Facebook stream about travel weather the day before Thanksgiving.

Eicher also gave high grades to the staff training provided. “The staff person brought in to train me on use of the program actually assisted with teaching the broadcast meteorology class, showing the students how to use the program directly.”

Customizable graphics product ideal for classroom environment

The customizable Lynx product enables the university to incorporate a range of other available weather data and create graphics ideal for the classroom environment.

The university is also looking into developing a range of other graphics for use on its new website, as well as creating more content using Lynx for educational purposes. Also in the planning stages is hooking other camera sources, such as a roof/sky camera, into the Lynx program, combined with weather data. “Word is getting out that we have a pretty unique opportunity,” concludes Professor Eicher.

Breaking up is hard to do. Those are not my words. They were sung by a much more talented guy named Neil Sedaka. He sang those lyrics back in 1976, but they are still true today. Breaking up is hard to do. You can watch a performance of the song here. (Watching this video could result in physical distress for viewers born after the year 1970.) Staying in a relationship is easier than breaking up, even if staying is unhealthy. One of the main reasons people stay in broken relationships is that however unpleasant and unhealthy the situation is, change can be even more difficult. Clinical psychologist Dr. Samantha Rodman wrote the following in a recent blog post for a website called TalkSpace:

“A common example of fear of change is when a person stays in an unfulfilling romantic relationship because they are terrified of being single, or of the effort and risk involved in trying to find a different partner. People often coast along in unfulfilling relationships, even marrying a person about whom they feel ambivalent, just because they are so scared at the prospect of breaking up.”  https://www.talkspace.com/blog/fear-of-change-why-life-adjustments-are-difficult/

Change is hard to do.  Change is at the heart of what is hard about breaking up for many people.  Change is especially tough when you are in love with Microsoft Office.



Wednesday, 03 April 2019 14:49

Breaking Up is Hard to Do

Nearly 20 years ago, I had the humbling privilege to be assigned as the donations manager for the state of New York following the Sept. 11, 2001, attacks on the World Trade Center.

I deployed from California to the New York State Emergency Operations Center in Albany via the interstate Emergency Management Assistance Compact. It was a cold, dreary first day in upstate New York. I entered Highway Patrol Headquarters, proceeded past the blast doors, and down into the Cold War fallout shelter in the basement. There was a buzz of subdued, chaotic efficiency. The New York State Emergency Operations Center was in full activation.

The State Emergency Management Office’s (SEMO) deputy director covered the initial ramp-up to the “second disaster”: a flood of well-intentioned but not always useful in-kind donations. Within the first few days of the disaster, well-meaning people and organizations sent truckloads of what donations managers call “stuff.” Stuff was piling up all over the streets of New York City and around ground zero, clogging the streets, impeding access to the disaster site, and getting in the way of first responders’ ability to respond.



Architects can, and do, choose a primary cloud service provider and/or Hadoop system to house their data. Moving, transforming, cataloging, and governing data is a different story, so architects come to me after throwing up their hands searching for solutions to tame the information fabric, thinking they must be missing something: “Isn’t there a single platform?” they ask.

Sadly, no. There are only best-of-breed tools or data management platforms in transition.

There’s history behind this. Data management middleware companies tend to be relatively small. Information management vendors such as IBM, Oracle, and SAP pick off smaller data management vendors and add their offerings as solutions to their overall platform portfolio to sell as enablers of their big data and cloud systems. Small vendors don’t have the funds to preemptively build capabilities as markets shift toward new architectures like big data and cloud. Big vendors cater to the 80 percent of firms running their businesses on traditional, reliable technology. Thus, data management and governance have lagged behind the big data and cloud trends. Ultimately, vendors large and small have had a wait-and-see strategy, building capabilities and rearchitecting solutions only when customers began to show higher levels of interest (it’s in the RFI/RFP).



(TNS) — Gov. Gretchen Whitmer declined to declare a state of emergency for Shiawassee County last week.

The damage done to the county after tornadoes ripped through its villages and towns on March 14 didn’t meet the state threshold for declaring a State of Emergency, according to Shiawassee County Commission Chair Jeremy Root.

In total, 135 structures were damaged or destroyed in the wake of the storm, including 94 homes, four businesses, 16 barns and 22 RVs. Approximately $10 million in damage was done to homes and businesses, Shiawassee Emergency Management Director Trent Atkins stated.

Three goats and a chicken were killed, but no people were injured or killed by the tornadoes, Atkins said.



The lines between agencies, consultancies, and tech services firms are continuing to blur. This convergence is driven in part by an acquisition-heavy strategy. Like in 2017, the last year of acquisitions saw cloud and agency capabilities as most in demand. But what does this mean for buyers? Your go-to boutique agency may (soon) be part of a larger firm. Or your managed services partner likely has a whole new set of intelligent solutions that weren’t even ideas yet when you signed the contract.

Services firms have been quick to buy to fill gaps in skill sets, solutions, or customer lists. SAP acquires CallidusCloud? Time for a new commerce specialist acquisition. Struggling with AI for NLP? There’s a startup for that. Still haven’t managed to convince the market that you can develop a modern digital strategy on top of implementation work? A midsized, industry-oriented consultancy might just get your foot in the door.

In 2018 alone, the services partners we track made over 100 acquisitions. In a recently published infographic, we break down the top trends in acquisitions and what that means for services buyers. Here’s a quick snapshot:



Wednesday, 03 April 2019 14:39

If You Can’t Beat ‘Em, Buy ‘Em

(TNS) — California’s hospitals are scrambling to retrofit their buildings before “The Big One” hits, an effort that will cost tens of billions of dollars and could jeopardize health-care access, according to a newly released study.

The state’s 418 hospitals have a deadline from the state, too. They’re racing to meet seismic safety standards set by a California law that was inspired by the deadly 1994 Northridge earthquake, which damaged 11 hospitals and forced evacuations at eight of them.

By 2020, hospitals must reduce the risk of collapse. By 2030, they must be able to remain in operation after a major earthquake.

That could cost hospitals between $34 billion and $143 billion, according to a new report from Rand Corp.



The definition of emergency is “a serious, unexpected, and often dangerous situation requiring immediate action.” The key word here is “unexpected.” You can’t predict emergencies, but you can still plan for them if you understand your most likely threats. One crucial part of this planning process is creating emergency notification message templates. After all, even if you don’t know the exact nature or time of the next threat, you can be sure that you will be communicating with your employees. Having emergency notification message templates saves you precious time and bandwidth, which you can allocate to more pressing needs.

The same goes for any emergency response strategy. Not every situation is predictable, but it’s wise to assess your current risks and plan how you would respond. That plan starts with message templates. In this post we will talk about the four most important types of emergency notification message templates, and even give you access to a few templates that we have built.
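As a simple illustration, a template with placeholder fields might look like the sketch below. The wording and field names are hypothetical, not drawn from any particular product; the point is that only the specifics need filling in at send time, so no wording decisions are made mid-emergency.

```python
from string import Template

# Hypothetical severe-weather template with placeholder fields.
SEVERE_WEATHER = Template(
    "ALERT: $event expected near $location at $time. "
    "$action. Reply SAFE when you are out of danger."
)

# At send time, only the incident specifics are filled in.
message = SEVERE_WEATHER.substitute(
    event="Tornado",
    location="the Main St. campus",
    time="3:45 PM",
    action="Move to the lowest interior floor",
)
```

A library of such templates, one per likely threat, is what turns a notification system from a blank text box into a response plan.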



Last month, my colleague Dave Johnson and I published a report that shared a better way for companies to measure the quality of their employee experience. The Employee Experience Index rests on years of research by Dave and me but also incorporates findings from academic studies that update what we know about what makes a great employee experience.

We now have two years of data back, and it’s clear what factors matter most to employees about their experiences working for a company. Companies must empower, inspire, and enable their employees. Think of factors like granting employees freedom to decide how to do their jobs, or inspiring belief among employees in the core mission and values of the company, or that the IT department helps them be productive. It turns out that these are some of the most important elements of an employee experience to get right.



Today’s sophisticated and ever-changing technologies have made the world smaller and opened up new ways of communicating. The publication of a revised International Standard ensures that we are all “talking” the same language when it comes to date and time.

We don’t take kindly to our sleep being disrupted in the wee hours by a selfie from friends sharing their latest updates from their holiday on a far-flung beach resort. But when it comes to doing business in today’s hyperconnected world, late-night grumpiness can leave you with serious egg on your face.

From making sure your online calendar is in sync for virtual meetings with colleagues in other time zones, to scheduling video conference calls, not to mention turning up for face-to-face meetings on the right day after a long-haul flight, if you want to be taken seriously in a highly competitive world, it is not acceptable to get the date and the time wrong.  
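The revised International Standard referred to here, ISO 8601, avoids exactly these mix-ups by encoding date, time, and UTC offset in one unambiguous string. A brief Python sketch of the idea:

```python
from datetime import datetime, timedelta, timezone

# A meeting time expressed with its UTC offset (here, five hours behind UTC).
meeting = datetime(2019, 4, 3, 14, 30, tzinfo=timezone(timedelta(hours=-5)))
stamp = meeting.isoformat()  # ISO 8601 string: '2019-04-03T14:30:00-05:00'

# A colleague in any time zone can recover the same instant from the string.
parsed = datetime.fromisoformat(stamp)
utc_time = parsed.astimezone(timezone.utc)  # 19:30 UTC, the same moment
```

Because the offset travels with the timestamp, a calendar entry written in one time zone lands at the right hour in another, which is precisely the late-night-selfie problem the standard solves.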



It goes without saying that backing up data is one of the most important things a business can do, especially considering how data is now essentially the lifeblood of an organization. With this in mind, five IT industry professionals give their advice as to how business continuity professionals can keep up with the ever-evolving world of backup...

The era of ‘always-on’

In today’s business landscape, being ‘always-on’ is an essential. It can be demanding on an organization, especially when the pressure of having the most up-to-date backup technology is ever-present. As Rob Strechay, Senior Vice President of Product at Zerto comments, “From tape, to hard drive and now cloud, which is really tape in many cases, the target and management has changed, yet fundamentally it is still based on periodic snapshots of information. But in an ‘always-on’ business landscape, how can an organization feel protected with an antiquated backup strategy? The answer is it can’t.”



Monday, 01 April 2019 15:00

Backup – is your strategy evolving?

Compliance professionals still “own” too many risks that business units could manage more effectively. Gartner’s Brian Lee discusses one solution: moving ownership of compliance risks closer to their sources.

It’s a time of enormous change for organizations of every type. Gartner’s 2018 survey of CEOs shows that CEOs, who have been focused on growth for years, are now prioritizing firm plans to deliver it — plans that involve IT-related transformation and new corporate structures and cultures.

Over half the CEOs say their organizations are actively engaged in strategic digital transformation efforts. This development has greatly expanded the list of responsibilities (which often require technical expertise) for compliance professionals at a time when there is a notable talent shortage in key areas.



(TNS) - As Des Moines County goes through what is predicted to be a particularly long flood season, Des Moines County Emergency Management is reminding everyone to be safe.

"Do not go in the water," says emergency management director Gina Hardin.

Hardin said in years past she has seen children playing in flood waters in a flooded parking lot. While playing in the river may be fine when the river is at its normal level, playing in elevated water can be dangerous for a number of reasons.

For starters, the river moves fast while it is flooded. According to the National Weather Service, 6 inches of fast-moving water is enough to knock an adult over.



The digital revolution is transforming our world. Protiviti’s Jim DeLoach shares how, over the next few years, many organizations will need to undertake radical change programs and – in some cases – completely reinvent themselves to remain relevant and competitive.

Is disruptive innovation sufficiently emphasized on the board agenda and in the C-suite?

Ask executives and directors what their company’s biggest threats are and, chances are, their answer will include the threat of disruptive innovation. As our latest global top risks survey indicates, many leaders are concerned about whether their existing operations and legacy IT infrastructure are able to meet performance expectations related to quality, time to market, cost, innovation and competitors – especially new competitors – that are “born digital,” with a low-cost base for their operations. Additionally, the rapid speed of disruptive innovation and new technologies and resistance to adapting operations in the face of indisputable change rank high on the list of top risks.



Don't you hate it when one loud co-worker at the office takes all the credit and keeps the rest of the team out of management's eye? Welcome to the world of Internet of Things (IoT) malware, where several families do their malicious worst — only to hear IT professionals droning on about Mirai, Mirai, Mirai.

Don't be misled: Mirai is still out there recruiting low-power IoT devices into botnets, but it's certainly not the only piece of malware you should be aware of. Mirai wasn't even the first of the big-name IoT baddies — that distinction goes to Stuxnet — but the sheer size of the attacks launched using the Mirai botnet and the malware's dogged persistence on devices around the world have made it the anti-hero poster child of IoT security.

Mirai has continued to grow through variations that make it a malware family rather than a single stream of malware. And it's not alone: Malware programmers are much like their legitimate software development counterparts in their programming practices and disciplines, making code reuse and modular development commonplace. Each of these can make it tricky to say whether a bit of malware is new or just a variant. Regardless, security professionals have to stop all of them.



(TNS) — Columbine High School, Sandy Hook Elementary School, Las Vegas and Sutherland Springs.

These are just a small fraction of the number of mass shooting events seen at schools, churches and businesses that have made headlines over the past couple of years.

One local retired teacher wants to try to put a stop to these events.

While teaching in the classroom for 24 years at Cleburne ISD, Jackie Beatty said parents never had to worry about sending their children to school. That is not the case nowadays, she said.

She is encouraging local school districts, churches, law enforcement agencies and businesses to purchase the Safe Zone Gunfire Protection technology, which uses cloud-based machine learning to detect gunfire in a building.



Once you’ve identified the risks facing your organization, you need to consciously select a risk mitigation strategy for each one. In today’s post, we’ll explain the four possible strategies and share some tips to help you choose between them.


So you’ve completed a threat and risk assessment (TRA). Excellent. You now have a good idea of the main threats your organization faces, the likelihood that each will occur, and an estimate of the consequences to the organization if each did occur. (For more on TRAs, see this recent post.)

What do you do next?
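This excerpt doesn’t name the four strategies, but in standard risk-management practice they are avoid, mitigate, transfer, and accept. A hypothetical sketch of applying them across a simple risk register (the 1–5 scores and the thresholds are illustrative choices of ours, not from the post):

```python
def choose_strategy(likelihood: int, impact: int) -> str:
    """Pick a risk treatment from 1-5 likelihood/impact scores.

    Thresholds are illustrative; real programs tune them to risk appetite.
    """
    if likelihood >= 4 and impact >= 4:
        return "avoid"      # too hot to keep: exit the activity
    if impact >= 4:
        return "transfer"   # rare but severe: insure or outsource it
    if likelihood >= 4:
        return "mitigate"   # frequent but survivable: add controls
    return "accept"         # low/low: document, monitor, move on

register = {
    "data center flood": (2, 5),
    "laptop theft": (4, 2),
    "minor vendor delay": (1, 1),
    "unpatched legacy app breach": (5, 5),
}
for risk, (likelihood, impact) in register.items():
    print(f"{risk}: {choose_strategy(likelihood, impact)}")
```

The point of the exercise is the conscious selection the teaser describes: every risk on the register ends up with exactly one named treatment, rather than being left implicit.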



Large donations by companies and family foundations provide the cornerstone for many prominent nonprofit organizations. But when those donations become shrouded in negative publicity, recipients must weigh their value against the damage to the organization’s own reputation.

A case in point is the wealthy and philanthropic Sackler family of Purdue Pharma, the maker of OxyContin. The recent deluge of opioid lawsuits is forcing a widespread reevaluation.

Several museums, including the Met with its Temple of Dendur, have been the targets of public protests, with demonstrators posing as overdose victims splayed on the ground surrounded by pill bottles and opioid prescriptions.

When public perception is at stake, does it matter whether the money came from illegal or controversial endeavors? Or is it just guilt by association? Either way, the optics are terrible.

How should an organization respond?



Thursday, 28 March 2019 19:56

When Donations Come Back to Haunt You

By Lynne McChristian, I.I.I. Non-resident Scholar and Media Spokesperson

Ah, spring! The season of renewal, of fresh beginnings, of flowers in bloom – and of fresh batteries in the smoke alarm. Yes, you probably overlooked that last item, so here’s a reminder to put it on the spring to-do list.

Checking (and changing) the batteries in the smoke alarm is a good springtime habit. Most homes have a smoke alarm, but if you don’t check it with regularity, you can’t be sure it’s working. It is one of those out-of-sight, out-of-mind things, so here’s a reminder to put your home or business smoke alarm top of mind.

According to the National Fire Protection Association (NFPA), almost three of every five home fire deaths resulted from fires in homes with no smoke alarms or in homes where the smoke alarm was not working. NFPA also points to missing or disconnected batteries as a leading reason for inoperable smoke alarms. Dead batteries cause 25 percent of smoke alarm failures.



Many new business continuity programs start strong then slow to a crawl, sacrificing the benefits of getting up and running quickly. In today’s post, we’ll share some tips on how you can get off the blocks fast and sprint through the finish, getting your program going in twenty-four months or less.

Unfortunately, we see time and time again that BC programs get off to a strong start, with new people coming in with a lot of enthusiasm, but then, for various reasons, get bogged down. In such programs, even the biggest gaps never get covered and life is a never-ending slog.

It’s so much better if a program gets off to a strong start and then runs swift and true all the way to the finish line—defined as a program that is comprehensive, executable, and maintainable.



It was a balmy 67-degree day in New York on March 15, which prompted the inevitable joke that since it’s warm outside, climate change must be real. The wry comment was made by one of the speakers at the New York Academy of Sciences’ symposium “Science for decision making in a warmer world: 10 years of the NPCC.”

The NPCC is the New York City Panel on Climate Change, an independent body of scientists that advises the city on climate risks and resiliency. The symposium coincided with the release of the NPCC’s 2019 report, which found that in the New York City area extreme weather events are becoming more pronounced, high temperatures in summer are rising, and heavy downpours are increasing.

“The report tracks increasing risks for the city and region due to climate change,” says Cynthia Rosenzweig, co-chair of the NPCC and senior research scientist at Columbia University’s Earth Institute. “It continues to lay the science foundation for development of flexible adaptation pathways for changing climate conditions.”



Thursday, 28 March 2019 19:52


The use of the Federal Emergency Management Agency’s (FEMA) Integrated Public Alert and Warning System (IPAWS) is continually growing among state and local jurisdictions across the U.S.

Now that IPAWS has many success stories attributed to its use, public safety officials are getting a better sense of just how effective this tool can be. The number of applications for Collaborative Operating Group (COG) approvals is increasing; in some states, 80-90% of county emergency management agencies are now IPAWS Alerting Authorities.

Even with such promising results, many public safety officials are still unclear about how effective IPAWS can be when used in combination with their existing mass notification systems. Although professional discretion is afforded through the FEMA-IPAWS Memorandum of Agreement (MOA), some agencies are still uncomfortable determining what should be considered an “imminent threat” worthy of initiating a Wireless Emergency Alert (WEA).



In today’s world where the technology of road vehicles is moving ahead at racing pace, it is important that these exciting new electronic features are safe. A series of International Standards for functional safety of electrical and electronic systems in road vehicles has just been updated to keep the automotive industry ahead of the pack.

Cars have come a long way from the days of internal combustion engines a century ago, or even manual wind-down windows. These days, it seems, everything is done by the touch of a button or through a simple voice command. Electronics are behind a mind-boggling array of vehicle functionalities and the technology just keeps on coming.

But with any powerful technology comes a set of risks. The purpose of the ISO 26262 series of standards is to mitigate those risks by providing guidelines and requirements for the functional safety of electrical and electronic systems in today’s road vehicles.



An emergency notification system empowers organizations to keep their people safe, informed, and connected through relevant, streamlined notifications during a critical event. Emergency notification systems automate and deliver messages so you can quickly and easily communicate with, or engage, your audience from anywhere, at any time, using any device. Your emergency notification system should monitor threats for you, assist you in identifying who might be impacted by a threat so you can effectively communicate, and ultimately help you improve outcomes.

Your emergency notification system should be incredibly user-friendly. Similarly, the process of understanding your vendor and how you would partner together should be just as easy.

From demo to implementation, the process should be painless. When evaluating emergency notification systems vendors and to ensure your success, it’s important to understand what you can expect from your partnership.
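The “identify who might be impacted, then communicate” flow described above can be sketched roughly as follows; the contact fields, channel names, region-matching rule, and stub transport are our assumptions for illustration, not any particular vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    region: str
    channels: tuple  # ordered by preference, e.g. ("sms", "email")

def send(channel, contact, message):
    # Stub: a real system would call an SMS/email/push gateway here.
    return channel in {"sms", "email", "push"}

def notify_impacted(contacts, threat_region, message):
    """Send `message` to every contact in the threatened region,
    trying each of their channels in order (first success wins)."""
    delivered = []
    for contact in contacts:
        if contact.region != threat_region:
            continue  # not impacted by this threat
        for channel in contact.channels:
            if send(channel, contact, message):
                delivered.append((contact.name, channel))
                break
    return delivered

people = [
    Contact("Ana", "county-east", ("sms", "email")),
    Contact("Raj", "county-west", ("push",)),
]
print(notify_impacted(people, "county-east", "Flood warning: avoid Route 9"))
# → [('Ana', 'sms')]
```

Scoping the send to the impacted region and falling back across channels is what turns a raw broadcast tool into the “relevant, streamlined notifications” the article calls for.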



In January, BlackRock accidentally leaked confidential sales data by posting spreadsheets insecurely online – certainly not the first time we’ve seen sensitive information “escape” an organization. Incisive CEO Diane Robinette provides guidance companies can follow to minimize spreadsheet risk.

Several weeks ago, the world’s largest asset manager, BlackRock, accidentally posted a link to spreadsheets containing confidential information about thousands of the firm’s financial advisor clients. As reported by Bloomberg News, the link was inadvertently posted on the company’s web pages dedicated to BlackRock’s iShares exchange-traded funds. Included in these spreadsheets was a categorized list of advisors broken into groups identified as “dabblers” and “power users.”

While BlackRock was lucky in that there was no financial information included on these spreadsheets, the firm is still left to deal with reputational damage. For the rest of us, this breach brings an important issue — spreadsheet risk management — back into the spotlight.

Despite years of rumors predicting the demise of spreadsheets, they are still widely used by businesses of every size. And why shouldn’t they be? Beyond providing an easy way to categorize clients and business partners, spreadsheets continue to meet the analytical needs of finance and business executives. They are especially useful for analyzing and providing evidentiary support for decision-making and for complex calculations where data is continuously changing. Yet, as we’ve seen time and time again, spreadsheets represent continued exposure to risk.



Wednesday, 20 March 2019 15:37

Lessons from BlackRock’s Data Leak

The sharp decline follows an FBI takedown of so-called "booter," or DDoS-for-hire, websites in December 2018.

The average distributed denial-of-service (DDoS) attack size shrank 85% in the fourth quarter of 2018 following an FBI takedown of "booter," or DDoS-for-hire, websites in December 2018, researchers report.

Late last year, United States authorities seized 15 popular domains as part of an international crackdown on booter sites. Cybercriminals can use booter websites (also known as "stresser" websites) to pay to launch DDoS attacks against specific targets and take them offline. Booter sites open the door for lesser-skilled attackers to launch devastating threats against victim websites.

About a year before the takedown, the FBI issued an advisory detailing how booter services can drive the scale and frequency of DDoS attacks. These services, advertised in Dark Web forums and marketplaces, can be used to legitimately test network resilience but also make it easy for cyberattackers to launch DDoS attacks against an existing network of infected devices.



Wednesday, 20 March 2019 15:35

DDoS Attack Size Drops 85% in Q4 2018

The #MeToo and #TimesUp movements brought the continuing problem of workplace misconduct onto the national stage, shining a light not only on the prevalence of harassment, but also on the dire need for effective processes to investigate when allegations are made. Clouse Brown Partner Alyson Brown discusses.

Confidential information
It’s in a diary
This is my investigation
It’s not a public inquiry.

— “Private Investigations,” Mark Knopfler/Dire Straits

It’s Friday. Thoughts are turning to the weekend ahead. The phone rings: We have a problem — I’ve gotten a complaint of sexual harassment against a senior VP. What do I do?

I’ve had variations of this call dozens of times. In the months since #MeToo and #TimesUp grabbed national headlines, the volume of calls about workplace complaints, especially those involving senior executives, has skyrocketed.

Employers and executives must act promptly when faced with these complaints. An effective workplace investigation can mean the difference between effective resolution and unwanted litigation. Moreover, in the current business environment, how employers investigate potential misconduct can affect that company’s reputation almost as much as the alleged conduct itself.

Consistent principles and procedures must be followed whenever allegations of misconduct are investigated. While volumes are written on how to ask questions and read body language, less guidance is available on the pre-planning necessary for an effective investigation.



The automation, stability of infrastructure, and inherent traceability of DevOps tools and processes offer a ton of security and compliance upsides for mature DevOps organizations.

According to a new survey of over 5,500 IT practitioners around the world, conducted by Sonatype, "elite" DevOps organizations with mature practices, such as continuous integration and continuous delivery of software, are most likely to fold security into their processes and tooling for a true DevSecOps approach.

Throughout the "DevSecOps Community Survey 2019," responses show that mature DevOps organizations have an increasing awareness of the importance of security in rapid delivery of software and the advantages that DevOps affords them in getting security integrated into their software development life cycle.



To make sure that homeowners are aware of the importance of flood insurance, the I.I.I. recently partnered with the Weather Channel.

A video posted to the Weather Channel’s Facebook page demonstrates just how destructive flooding can be; for example, in the video you can see the devastation Hurricane Sandy wreaked on Breezy Point, a coastal community in Queens, NY.

“What’s remarkable about flood insurance is that only 12 percent of people have it,” says Sean Kevelighan, I.I.I.’s CEO. One misconception that people have about flood insurance is that it’s included in a homeowners policy. But that’s not the case. A separate flood policy must be obtained. Flood insurance is mostly sold by FEMA’s National Flood Insurance Program, but some private insurers have begun offering it as well.



The latest twist in the Equifax breach has serious implications for organizations.

When the Equifax breach — one of the largest breaches of all time — went public nearly a year-and-a-half ago, it was widely assumed that the data had been stolen for nefarious financial purposes. But as the resulting frenzy of consumer credit freezes and monitoring programs spread, investigators who were tracking the breach behind the scenes made an interesting discovery.

The data had up and vanished.

This was surprising because if the data had, in fact, been stolen with the ultimate goal of committing financial fraud, experts would have expected it to be sold on the Dark Web. At the very least, they would have expected to see a wave of fraudulent credit transactions.




Wednesday, 20 March 2019 15:30

The Case of the Missing Data

(TNS) — Somerset County, Pa., will test its CodeRED emergency public mass notification system at 3 p.m. Tuesday, according to the county’s top emergency management official.

Joel Landis, director of the Somerset County Department of Emergency Services, said on Saturday that he urged business owners and members of the public to sign up prior to the test for the service, which is used to send notifications about emergency situations in the county by phone, email, text message and social media.

Landis said in an email that the “CodeRED system provides Somerset County public safety officials the ability to quickly deliver emergency messages to your landline or cell phone to targeted areas or the entire county.”

The CodeRED system is used to distribute information about emergencies such as evacuation notices, utility outages, water main breaks, fires, floods and chemical spills, according to information on Somerset County’s website.



A side-by-side comparison of key test features and when best to apply them based on the constraints within your budget and environment.

Crowdsourced security has recently moved into the mainstream, displacing traditional penetration-testing companies from what once was a lucrative niche space. While several companies have pioneered their own programs (Google, Yahoo, Mozilla, and Facebook), Bugcrowd and HackerOne now carve up the lion's share of what is a fast-growing market.

How does crowdsourced pen testing compare with traditional pen testing, and how does it differ in methodology? Does this disruptive approach actually make things better? Read on for a side-by-side comparison...



Wednesday, 20 March 2019 15:27

Crowdsourced vs. Traditional Pen Testing

While every tech vendor seems to lay claim to being an expert in digital transformation, it stands to reason that not all can be. For sure, there are many vendors with experience helping clients create new customer or employee digital experiences, but this experience doesn’t make them experts in digital business transformation.

For 20 years, Forrester has been extolling the virtues of improving customer experience – we’ve even proven the value of delivering world-class experiences, including digital experiences.

And over these years, many of our clients have successfully mapped customer journeys and improved touchpoints, all the while seeing gradual improvements in their Customer Experience Index (CX Index™) scores.

But what happens when everyone’s customer journeys are optimized and when all digital experiences begin to look similar? As customer expectations rise, you must invest to improve touchpoints just to remain competitive. Without a major shift in how your leadership thinks about digital, your firm will struggle to break out from the pack.



The Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry (the Banking Royal Commission, or BRC) has been in Australian media headlines since the Commission was established on December 14, 2017. On February 4, 2019, the widely anticipated final report from Commissioner Hayne was released.

While Australian banks were the BRC’s focus, international institutions watched with keen interest and made submissions to ensure their voices were heard, anticipating that the resulting regulations for financial institutions would be far stricter and more structured.

However, the impact is not limited to the financial sector. Commissioner Hayne recommended a change to the regulators’ enforcement approach, which may transform the perceived soft touch of the country’s principal corporate watchdog, the Australian Securities and Investments Commission (ASIC).

For overseas companies operating in Australia, these changes may impact future engagements with the Australian regulator and the prospects of global settlements where multiple regulators are involved.



When I started my career in marketing analytics almost 20 years ago, the biggest challenge was wrangling first- and third-party data, joining them together, and analyzing customer patterns. It was like mining for gold; we wanted to discover something unique about our customers, a nugget that our marketing counterparts could use to craft customized messages or target more effectively. It took a lot of time (this was before the ad- and martech boom), but it was fun spending hours programming and running models to understand customer behaviors.

Well, it was fun for me. My colleagues may not agree.

So when I was asked to take over data management platform coverage, I geeked out in excitement. It was my time to learn more about how data-specific technologies automate the mundane tasks that I had to do years ago, and with new, quickly changing data sources.



(TNS) — Efforts are underway to help residents in recovery mode after four tornadoes left behind a path of damage across two Michigan counties.

The National Weather Service confirmed that a pair of tornadoes touched down in Shiawassee County and two in Genesee County, damaging homes and barns, splintering trees and downing power lines, leaving thousands in the dark.

An informational meeting is set for 3 p.m. Sunday, March 17 in the cafeteria at Durand High School, 9575 E. Monroe Drive, with emergency management and government officials to address items such as recovery efforts, resident/business resources for relief, and short/long-term housing needs.

Shiawassee County Sheriff Brian BeGole confirmed a local state of emergency has been declared after the tornadoes damaged 61 homes (20 deemed uninhabitable or destroyed), as well as 16 barns and two businesses. They included an EF-2 with winds up to 125 mph that tracked from Newburg Road/Bancroft Road to M-71 just southeast of Vernon.



The stakes are getting higher for CROs and compliance officers. Brenda Boultwood of MetricStream details why it’s increasingly imperative that risk and compliance professionals work hand in hand to address ongoing risks and strengthen organizational GRC efforts.

While risk and compliance functions have run on parallel tracks for years, 2019 is likely to witness a new level of synergy between the two groups as they collectively seek to help their organizations drive performance while preserving integrity.

Partnering in this effort will be the Chief Risk Officer (CRO) who, by virtue of his or her bird’s-eye view of organizational processes and hierarchies, is well-positioned to understand how compliance ties back to risk, where key issues or concerns might lie and how risk frameworks can be integrated with compliance to optimize value.

Some large banks have organizationally integrated their operational risk management functions with their regulatory compliance functions (or are in the process of doing so), but this is less important than understanding the synergies.

With that in mind, here are four specific areas where I believe the CRO can impact compliance in 2019:



(TNS) - Across West Virginia at about 10:30 a.m. on Tuesday, sirens will blare, weather alert radios will activate and test emergency broadcast messages will interrupt television and radio programming as a statewide tornado test alert begins.

Federal, state and county emergency officials urge West Virginia families, businesses, hospitals, nursing homes, schools and government agencies to use the test alert to simulate what actions would be taken in the event of a real tornado emergency, and to update emergency plans as needed.

“Testing your emergency plan, whether with family members or co-workers, helps ensure we will all be ready for the next severe weather event in the state,” said Michael Todorovich, director of the West Virginia Division of Homeland Security and Emergency Management.

“This is the time to work through your emergency plans and to ensure you know what to do if an actual tornado occurs in Kanawha County,” said Kanawha County Commission President Kent Carper.

In the event of a real tornado warning, families are advised to gather in the basements of their homes, or in small, interior rooms with no windows on the home’s lowest level, until the warning ends. If traveling in vehicles when a tornado warning is issued, avoid parking below overpasses or bridges and choose a low, flat site to wait out the warning.



(TNS) - More practical — and perhaps more stylish — than the latest fashion handbag, a bright red emergency preparedness "go bag" distributed by the Department of Homeland Security might be even harder to land than next season's Fendi.

These red backpacks containing items from packets of water to hand-cranked radios are limited in distribution to senior citizens and people with disabilities who attend emergency preparedness training workshops, such as the one put on Wednesday afternoon at the office of the Cape Organization for Rights of the Disabled on Bassett Lane.

But while not everyone can get their hands on one of the DHS go bags, every adult on Cape Cod can learn to develop a response for dealing with natural disasters and other emergencies, said Barnstable police Lt. John Murphy, who attended Wednesday's program with Barnstable Police Sgt. Thomas Twomey.

"The most important thing is the preparedness part," Murphy said. "Get the message out. That is the goal of these types of programs."



Monday, 18 March 2019 15:32

Prepared for Disaster in Cape Cod

Gone are the days when the workplace was built around a fairly straightforward structure, consisting of employer, employee, customer. The winds of technological change may be sweeping away traditional models, but ISO 27501 is helping managers build a more sustainable one for the future.

From the advent of the Internet to what is now known as the Fourth Industrial Revolution, the latest cutting-edge technologies – among them robotics, artificial intelligence (AI), the Internet of Things – are fundamentally changing how we live, work and relate to each other. The issue for business in this new era is not so much about the bottom line, or even just corporate social responsibility, it is also about taking a human-centred approach to the future of work and finding the right tools to ensure that organizations are successful and sustainable.

The likes of AI are presenting a great opportunity to help everyone – leaders, policy makers and people from all income groups and countries – to lead more enriching and rewarding lives, but they are also posing challenges for how to harness these technologies to create an inclusive, human-centred future.

ISO 27501:2019, The human-centred organization – Guidance for managers, can help organizations to meet these challenges. In this brave new world, organizations will not only have an impact on their customers but also on other stakeholders, including employees, their families and the wider community.



Geary Sikich explains why he believes that Brexit is a Black Swan event and describes various issues that enterprise risk managers should consider when assessing and managing Brexit risks.

In his book, ‘The Black Swan: The Impact of the Highly Improbable’, Nassim Taleb defines a Black Swan in the Prologue on pages xvii – xviii, xix, xx – xxi, xxv, xxvii.  I quote a few (what I consider) key points:

xvii: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility.

Second, it carries an extreme impact.

Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

xxv: “The Platonic fold is the explosive boundary where the Platonic mindset enters in contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide.  It is here where the Black Swan is produced.”

xxvii: “To summarize: in this (personal) essay, I stick my neck out and make a claim, against many of our habits of thought, that our world is dominated by the extreme, the unknown, and the very improbable (improbable according to our current knowledge)…”

To summarize:

A Black Swan is a highly improbable event with three principal characteristics: it is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was.

Taleb continues by recognizing what he terms the problem: “Lack of knowledge when it comes to rare events with serious consequences.”



Lesley Maea suggests compliance today could take a cue from Marie Kondo in her Netflix hit, “Tidying Up.” To remain safe and secure, use an intranet as a single source of truth. Yes, you read that right: an intranet.

Put everything in one place. Then, you can see what you have and get rid of what you don’t need. That’s one of the organization methods Marie Kondo uses in her Netflix hit, “Tidying Up.”

Our organizational lives look a lot like those of Marie’s clients. Your files are likely stacked up, spilling out or otherwise in disarray throughout your office. Some of you might be thinking, “You haven’t seen my office. I’m positively fastidious.” Well, then, let’s talk about your digital files.

Every organization — every department, even every computer — could use a little digital organization to increase compliance. Especially when it comes to employee handbooks, compliance training and policies and procedures, your employees likely don’t even know where to find the files. If they can find them, they’re probably out of date anyway.

So, let’s put everything in one place to provide employee access, keep it up to date and save your organization money.



Monday, 18 March 2019 15:28

Compliance Can Spark Joy, Right?

The DDoS threat landscape has developed rapidly leaving many organizations behind in both their perception of the risks and their actions to protect against them. Rolf Gierhard looks at the most dangerous and pervasive misunderstandings about DDoS attacks…

Most organizations understand that DDoS attacks are disruptive and potentially damaging. But many are also unaware of just how quickly the DDoS landscape has changed over the past two years, and underestimate how significant the risk from the current generation of attacks has become to the operation of their business. Here, I’m going to set the record straight about seven of the biggest misconceptions that I hear about DDoS attacks.

There are more important security issues than DDoS that need to be resolved first

When it comes to cyber attacks, the media focuses on major hacks, data breaches and ransomware incidents. Yet DDoS attacks are growing rapidly in scale and severity: the number of attacks grew by 71 percent in Q3 2018 alone, to an average of over 175 attacks per day, while the average attack volume more than doubled, according to the Link11 DDoS Report. There is no shortage of devastating examples. In late 2017, seven of the UK’s biggest banks were forced to reduce operations or shut down entire systems following a DDoS attack, costing hundreds of thousands of pounds according to the UK National Crime Agency. And in 2018, online services from several Dutch banks and numerous other financial and government services in the Netherlands were brought to a standstill in January and May. These attacks were launched using Webstresser.org, the world's largest provider of DDoS-on-demand, which sold attack services for as little as £11. It costs a criminal almost nothing, and requires little to no technical expertise, to mount an attack, but it costs a company a great deal to repair the damage.

What’s more, DDoS attacks are often used as a distraction, to divert IT teams’ attention away from attempts to breach corporate networks. As such, dealing with DDoS attacks should be regarded as a priority, not a secondary consideration. 



GandCrab's evolution underscores a shift in ransomware attack methods

Don't be fooled by the drop in overall ransomware attacks this past year: Fewer but more targeted and lucrative campaigns against larger organizations are the new MO for holding data hostage.

While the number of ransomware attacks dropped 91% in the past year, according to data from Trend Micro, at the same time some 75% of organizations stockpiled cryptocurrency. The majority that did also paid their attackers the ransom, according to a Code42 study. Overall, more than 80% of ransomware infections over the past year were at enterprises, as cybercrime gangs began setting their sights on larger organizations capable of paying bigger ransom amounts than the random victim or consumer.

The evolution of the prolific GandCrab ransomware over the past few months demonstrates how this new generation of more selective attacks is more profitable to the cybercriminals using it - and underscores how the ransomware threat is far from over.



Monday, 18 March 2019 15:25

Ransomware's New Normal

FEMA’s Integrated Public Alert & Warning System (IPAWS) now includes a new event code called Law Enforcement Blue Alert, or ‘Blue Alert’. 

The new BLU event code is available for selection with the IPAWS Emergency Alert System (EAS), with future plans to release it to Wireless Emergency Alerts (WEA).

The ‘Blue Alert’ provides officials with the ability to alert the public when a law enforcement officer has been injured, killed or is missing. The alert will push real-time information to the public, like the location of the incident and any identifying information – such as suspect or vehicle description – to help locate possible suspects.

Blue Alerts will be transmitted to television and radio stations with EAS and later to cellphones and wireless devices with WEA. Similar to current Amber Alerts for missing children, Blue Alerts enable agencies to rapidly disseminate information to other law enforcement agencies, the public and media outlets.



So you’ve just been put in charge of business continuity at your organization. What’s the first thing you should do? In today’s post, we’ll tell you—and also explain why it’s important and how to go about it.


Many people find themselves thrust into a business continuity (BC) role with little warning or preparation.

They frequently come from backgrounds in risk management, auditing, compliance, or IT.

It’s a daunting prospect to suddenly find yourself in charge of Business Continuity/Disaster Recovery (BC/DR) for even a small organization. It’s like being thrown in the deep end as a beginning swimmer.

Unless you have ice in your veins, or significant BC/DR experience elsewhere, you’re likely to feel overwhelmed. You will have to take time to educate yourself on your new responsibilities, and the learning never stops.

But the very first task is always the same.



Cybrary’s Joseph Perry shares the importance of corporate responsibility and how to navigate the operational and reputational challenges in response to a breach.

The rise of data breaches is well-documented, with thousands taking place every year and at least two or three annually for most organizations. In other words, it’s a question of when – not if – your organization will be affected.

With the element of surprise long gone, so too are any excuses for not having a strategy in place for managing these breaches. And in light of the fact that privacy and cybersecurity are now high-profile concerns in the public eye, it’s increasingly clear that any successful strategy will be built on a solid foundation of corporate responsibility.

Let’s take a closer look at why enhancing corporate responsibility is such an important – and often neglected – component of surviving a breach with your reputation intact. Then I’ll share four practical tips to help move the needle in that direction for your own company.



(TNS) — Residents in Lancaster and DeSoto had an unwanted wake-up call Tuesday when a malfunction set off warning sirens.

The sirens sounded around 2:20 a.m. and didn't go silent until sometime after 3. But unlike Saturday morning, there was no severe weather in the area.

"The Emergency Outdoor Warning Sirens have malfunctioned and are automatically sounding. We are currently working to address the concern, and will provide follow-up as quickly as possible," read a post on the city of Lancaster's Nextdoor page. "Sorry about the inconvenience."

At 4:11 a.m., the city of DeSoto issued a tweet that read, "Hopefully, by now they are all quiet."

The city also alerted residents via its CodeRed notification system saying everything was all clear and there was no emergency.



Researchers have developed a new model which shows that the probability of a catastrophic geomagnetic storm occurring is much lower than previously estimated, but the risk still needs to be taken seriously.

Three mathematicians and a physicist from the Universitat Autònoma de Barcelona (UAB), the Mathematics Research Centre (CRM) and the Barcelona Graduate School of Mathematics (BGSMath) have proposed a mathematical model that allows reliable estimation of the probability of geomagnetic storms caused by solar activity.

The researchers, who published the study in the journal Scientific Reports (of the Nature group) in February 2019, calculated the probability in the next decade of a potentially catastrophic geomagnetic storm event, such as the one which occurred between the end of August and beginning of September 1859, known as the ‘Carrington Event’. Such an event could create major issues for telecommunications and electricity supply systems across the Earth.

In 1859, astronomer Richard C. Carrington observed the most powerful geomagnetic storm known to date. According to this new research, the probability of a similar solar storm occurring in the following decade ranges from 0.46 percent to 1.88 percent, far lower than earlier estimates.
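As a back-of-the-envelope illustration only (the study's actual model is far more sophisticated), a per-decade probability like the one quoted above can be translated into an implied long-run recurrence interval by assuming Carrington-class storms arrive as a Poisson process:

```python
import math

def annual_rate_from_decade_prob(p_decade: float) -> float:
    """Implied annual event rate, assuming a Poisson process:
    P(at least one event in 10 years) = 1 - exp(-10 * rate)."""
    return -math.log(1.0 - p_decade) / 10.0

# The 0.46%-1.88% per-decade range reported by the researchers.
for p_decade in (0.0046, 0.0188):
    rate = annual_rate_from_decade_prob(p_decade)
    print(f"{p_decade:.2%} per decade -> roughly one event "
          f"every {1 / rate:,.0f} years")
```

Under this simplifying assumption, the study's range corresponds to one Carrington-class storm every several hundred to a couple of thousand years.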



(TNS) — A tornado was confirmed in Loving Tuesday night, as heavy wind, rain and hail moved through Eddy County and southeast New Mexico into West Texas.

Eddy County Emergency Manager Jennifer Armendariz said video footage confirmed the tornado touched down at about 5 p.m. in Loving in southern Eddy County.

She said no damage was reported despite accounts of golf-ball-sized hail, and after about two hours the storm had mostly cleared.

Multiple shelters were set up throughout the county and Armendariz said staff was sent home by about 7 p.m.

A unit from the Eddy County Office of Emergency Management was sent out to Loving to perform "recon," Armendariz said, and assess the damage.



Where do I start?

This is a conversation I’ve had many times with different people, and the situation may feel familiar to some of you. You’ve been tasked with developing a BC/DR program for your organization. Assume you have little or nothing in place, or that what you do have is so out of date it would be wise to start fresh. The question invariably comes up: Where do I start?

Depending on your training or background, you may start with a Business Impact Analysis (BIA) in order to prioritize and analyze your organization’s critical processes. If you have a security or internal audit background, you may feel inclined to start with a Risk Assessment. You may have an IT background and feel that your application infrastructure is paramount and you need a DR program immediately. If you’ve come from the emergency services or military, life safety might be foremost in your mind, and emergency response and crisis management might be the first steps. I’ve seen clients from big pharmaceutical companies that treat their supply chain as the number one priority.

The reality is that although there are prescribed methodologies with starting points outlined in best practices by various institutes and organizations with expertise in the field, there is only one expert when it comes to your organization. You.



Most organizations are doing all they can to keep up with the release of vulnerabilities, new research shows.

Security has no shortage of metrics — everything from the number of vulnerabilities and attacks to the number of bytes per second in a denial-of-service attack. Now a new report focuses on how long it takes organizations to remediate vulnerabilities in their systems — and just how many of the vulnerabilities they face they're actually able to fix.

The report, "Prioritization to Prediction Volume 3: Winning the Remediation Race," by Kenna Security and the Cyentia Institute, contains both discouraging and surprising findings.

Among the discouraging findings are statistics showing that companies have the capacity to close only about 10% of all the vulnerabilities on their networks. This percentage doesn't change much by company size.
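To see why a fixed remediation capacity is so discouraging, here is a hypothetical simulation (the numbers are illustrative, not drawn from the Kenna/Cyentia data): if an organization can close only a fixed fraction of its open vulnerabilities each month while new ones keep arriving, the backlog grows toward a steady state rather than shrinking.

```python
def simulate_backlog(initial_open, new_per_month, close_fraction, months):
    """Track the open-vulnerability count month by month, assuming a
    fixed remediation capacity and a steady inflow of new findings."""
    open_vulns = initial_open
    history = []
    for _ in range(months):
        open_vulns -= round(open_vulns * close_fraction)  # remediated
        open_vulns += new_per_month                        # newly found
        history.append(open_vulns)
    return history

# Hypothetical figures: 1,000 open vulns, 200 new per month, 10% capacity.
trajectory = simulate_backlog(1000, 200, 0.10, 12)
print(trajectory[0], trajectory[-1])  # the backlog climbs every month
```

With these assumed inputs, the backlog rises month over month until remediations finally balance new arrivals, which is the dynamic the report's 10% figure implies.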



About this time each year – when the SEC’s Office of Compliance Inspections and Examinations (OCIE) releases its annual Examination Priorities – we are reminded of how complex compliance can be for SEC-registered firms. As Duff & Phelps’ Chris Lombardy explains, this year is no exception.

In its 2019 Examination Priorities, issued on December 20, 2018, OCIE has outlined six themes that it will primarily, but not exclusively, focus on in the coming months. One new theme, digital assets, joins the five priorities that repeat from 2018:

  1. Matters of importance to retail investors, including seniors and those saving for retirement
  2. Compliance and risk in registrants responsible for critical market infrastructure
  3. Select areas and programs of FINRA and MSRB
  4. Digital Assets (cryptocurrencies, coins and tokens)
  5. Cybersecurity
  6. Anti-money laundering



Wednesday, 13 March 2019 15:17

How Defensible Is Your Compliance Approach?

Attackers used a short list of passwords to knock on every digital door to find vulnerable systems in the vendor's network.

The recent cyberattack on enterprise technology provider Citrix Systems using a technique known as password spraying highlights a major problem that passwords pose for companies: Users who select weak passwords or reuse their login credentials on different sites expose their organizations to compromise.

On March 8, Citrix posted a statement confirming that the company's internal network had been breached by hackers who had used password spraying, successfully using a short list of passwords on a wide swath of systems to eventually find a digital key that worked. The company began investigating after being contacted by the FBI on March 6, confirming that the attackers appeared to have downloaded business documents. 

Password spraying and credential stuffing have become increasingly popular, so companies must focus more on defending against these types of attacks, according to Daniel Smith, head of threat research at Radware.
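To see why spraying slips past per-account lockout policies, consider the access pattern: the attacker rotates accounts rather than passwords, so no single account accrues enough failures to trigger a lockout. The sketch below is a hypothetical log-analysis illustration (real authentication logs do not record the guessed password, so production detectors key on timing, source IPs, and failure bursts instead), flagging the spraying signature of one guess tried against many distinct accounts:

```python
from collections import defaultdict

def detect_spraying(failed_logins, account_threshold=10):
    """failed_logins: iterable of (account, password_guess) pairs.
    Flags any single guess attempted against many distinct accounts,
    the signature of spraying that per-account lockouts miss."""
    accounts_per_guess = defaultdict(set)
    for account, guess in failed_logins:
        accounts_per_guess[guess].add(account)
    return [g for g, accts in accounts_per_guess.items()
            if len(accts) >= account_threshold]

# A sprayer tries one guess against 50 accounts: no account ever sees
# enough failures to lock, but the guess itself stands out clearly.
events = [(f"user{i}", "Spring2019!") for i in range(50)]
events += [("alice", "typo1"), ("alice", "typo2")]  # benign noise
print(detect_spraying(events))  # -> ['Spring2019!']
```

The same inversion — aggregating by guess instead of by account — is why defenders are advised to correlate failed logins across the whole directory rather than monitoring accounts in isolation.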



Wednesday, 13 March 2019 15:15

Citrix Breach Underscores Password Perils

(TNS) - Next month marks the ninth anniversary of the British Petroleum Deepwater Horizon oil rig explosion off the coast of Louisiana that killed 11, injured 17 others, and spewed millions of gallons of oil into the Gulf of Mexico.

For those of us closest to the accident, the April 20, 2010, explosion will always be, first and foremost, a grave tragedy. But for analysts who study such things, the mishap is also something else: a case study yielding insights about how similar mistakes might be prevented in the future.

Or so we’ve been reminded by “Meltdown,” a 2018 book by Chris Clearfield and András Tilcsik that’s just been published in paperback. The subtitle of “Meltdown” is “What Plane Crashes, Oil Spills, and Dumb Business Decisions Can Teach Us About How to Succeed at Work and at Home.”

Clearfield is a former derivatives trader who lives in Seattle. Tilcsik, who researches organizational behavior, lives in Toronto. “Meltdown” is about a number of systems failures, including Deepwater Horizon, a crash on the Washington, D.C. metro, and an accidental overdose in a state-of-the-art hospital.



Flexible workspaces are saving companies time and money when disaster strikes, says Joe Sullivan, Head of Workplace Recovery Product at Regus

According to the 2019 WEF Global Risks Report, ‘extreme weather events’ are the biggest risk we face as an international community, with natural disasters, data fraud and cyber-attacks following close behind. We cannot prevent the unpredictable. What we can manage, however, is our level of preparation when disaster strikes.

At Regus, we speak from experience. In September 2018, Hurricane Florence impacted some of our centres in North Carolina, South Carolina and Virginia. The devastation was felt by so many of our colleagues, clients and their friends and family. Thankfully, our North America teams were ready to step in and help recover these facilities while taking care of our customers.

The financial cost of disasters such as this can be difficult to absorb. Since 2000, natural disasters have cost the global economy more than $2.4trn – an average of well over $100bn each year. But it’s not just the headline-grabbing incidents that affect businesses. It’s the everyday ones, too. A burst water pipe in your office may not sound like much of a threat but, if it means your premises are unusable for a month, what’s your back-up plan?



A new guide from the Cloud Security Alliance offers mitigations, best practices, and a comparison between traditional applications and their serverless counterparts.

Serverless computing has seen tremendous growth in recent years, accompanied by a flourishing ecosystem of new solutions that offer observability, real-time tracing, deployment frameworks, and application security.

As awareness of serverless security risks started to gain attention, scoffers and cynics fell into the age-old habit of calling "FUD" — fear, uncertainty and doubt — on any attempt to point out that while serverless offers tremendous value in the form of rapid software development and a huge reduction in TCO, it also brings new security challenges.



Wednesday, 13 March 2019 15:12

The 12 Worst Serverless Security Risks

Courtesy of Mail-Gard




Mail-Gard has the opportunity to exhibit at many industry shows and conferences, but one of our go-to events is the DRJ Spring conference, which is being held March 24–27 at the Disney Coronado Springs Resort in Orlando, FL. We can always count on the Disaster Recovery Journal (DRJ) to host an informative and invaluable conference that attracts speakers and attendees from all areas of the business continuity (BC), disaster recovery (DR), and risk management (RM) fields. For us, it’s a chance to connect with leaders and participants in our shared industry.

Risk Management is the Focus of DRJ Spring 2019

The theme of this spring’s conference is “Managing Risk in an Uncertain World,” and it’s certainly true that our world has become unpredictable in many ways. One of the things we’ve learned at Mail-Gard is that it’s truly impossible to plan for every possible emergency, but what we can do is prepare to manage the risks we’re aware of and refresh our recovery solutions regularly, so that change and uncertainty become manageable as well.

The DRJ Spring 2019 Conference gives us the opportunity to meet with current clients looking to polish up their DR plans while enhancing their industry knowledge by taking a few classes. We also get to talk to people who either don’t have a DR plan at all or who have realized that their DR vendor isn’t working. In either case, this is where Mail-Gard shines, because our focus is helping companies achieve their risk management goals, whether by designing print-to-mail recovery solutions or by fixing what’s wrong with their current plan.

For Mail-Gard, another advantage of attending the DRJ Spring 2019 conference will be the opportunity to brush up on the latest trends in BC/DR, such as cyber security, which is a moving target for planning and updating procedures. In fact, DRJ states, “When it comes to business continuity, what worked a year ago will not be effective today,” which is why risk management is a never-ending job. As a print-to-mail disaster recovery provider, Mail-Gard represents a different element within the larger BC/DR arena, but it’s a vital part of a successful BC/DR plan. In fact, we consider it the most important component, which is why it’s the sole focus of our business.

As a DR print-to-mail specialist, Mail-Gard has many advantages over competitors who offer DR mailing support as a sideline. Critical mailings are critical for a reason, whether financial or regulatory, and it’s surprising how often they’re overlooked or minimized in favor of the trending DR issues of the day. If you’re in Orlando during the last week in March, please stop by Mail-Gard at Booth #706 at DRJ Spring 2019. The Mail-Gard group would welcome the opportunity to help you make sure your DR plan is cleaned up, complete, and ready for spring.

Michael Henry

Vice President of Mail-Gard with more than 30 years of experience in direct mail. Specializes in leading and directing operations teams by simplifying, staying focused, and being relentless. Proud to be part of an organization that cares about its people. Longtime Philadelphia Eagles season ticket holder who also loves the Phillies and Flyers, being near the water, and coaching his kids’ sports activities.

My colleagues J. P. Gownder, Craig Le Clair, and I just published the results of a year-long study to answer the question “What happens when digital business systems and physical-world processes come together?” The answer: Atoms get their revenge. By that we mean that so much of our attention has been focused on digital business over the past decade that we have almost forgotten where business happens — in the real world.

What about eCommerce, online trading, and digital platforms? Yes, they are digital, but at the end of the day, it is still humans — sitting at their desks, in hotels, on airplanes, in the plant, at ball games, or at conferences — who drive most of the decisions around who buys what and how much, even when those decisions are executed by algorithms they have programmed. And all of that happens in the world of atoms. A big takeaway from our report is that when algorithms start to act on the physical world, firms have the opportunity to change their relationship with their customers. In other words, algorithms plus atoms balance the power between customers and businesses. We see savvy businesses deploying algorithms in the real world to balance customer engagement and efficient operations.

Consider, for example, innovative startup DocBox. It makes a clinical process management solution for hospitals that promises to help clinicians eliminate medical mistakes, improve clinical workflows and processes, and free up time. At the heart of its solution is a “patient area network” that integrates data from bedside machines, making insights available to doctors. While that is good for doctor and patient engagements, providers are exploring how to drive intelligence into logistics and operations to ensure that high-value capital equipment is placed and used efficiently as well.



Evan Francen, CEO of FRSecure and Security Studio, makes the case for adopting a third-party information security risk management (TPISRM) program. He outlines how to get started and explains why the common excuses for ignoring the risks don’t hold water.

Third-party information security risk management (TPISRM) is more critical today than it’s ever been. There is little doubt amongst information security experts that TPISRM can make or break your information security efforts, but the confusion in the marketplace is making it difficult to tell truth from hype. Ignoring the risks won’t make them go away, so something must be done. We just need to make sure it’s the right “thing.”

The Case for TPISRM

If the case for TPISRM isn’t obvious to you, you’re not alone. Only 16 percent of the 1,000 Chief Information Security Officers (CISOs) surveyed in a recent study claim they can effectively mitigate third-party risks, while 59 percent of these same CISOs claim their organizations have experienced a third-party data breach.

Third parties are implicated in up to 63 percent of all data breaches and regulators are increasingly scrutinizing how organizations handle third-party risks. Your organization can spend millions of dollars on a secure infrastructure, best-in-class training and awareness solutions and the most skilled professionals, but if you neglect to account for third-party risks, some or all of your investment is a waste.

Please let these numbers sink in for a moment. Logically, how do we deny the need for sound and cost-effective TPISRM when we know that it will decrease the likelihood and impact of a data breach? Logic says one thing, yet 57 percent of organizations don’t even have an inventory of the third parties they share sensitive information with.


