

Industry Hot News


(TNS) - The catastrophic mudslide that inundated houses in Montecito in Santa Barbara County in January, killing 21 people, appeared to hit suddenly. But the disaster, mere weeks after a wildfire scorched the area, didn’t come out of nowhere.

For over two decades, Cal State Fullerton’s Binod Tiwari has studied such mudslides and landslides around the world, including in Southern California, to understand their causes and mitigate their devastation.

In 2014, the civil and environmental engineering professor and his students worked on a regional study on debris flow and mudflow after a series of December storms. The study included areas affected by the Silverado Canyon fire and the 91 Freeway fire, both in September 2014. It found that reports of mudflows and mudslides appeared to be exclusively in areas that burned that year or the year before.



Linux has an enviable reputation as a secure platform for servers. But Linux the Unhackable?

Certain myths persist about the inherent resistance of Linux to viruses and the superfluity of firewalls.

However, the only basis in truth is statistical, and it is fast fading: as a minority platform, Linux attracted less interest from hackers, who consequently wrote fewer viruses to attack it.

As Linux’s popularity has grown, so has the number of viruses, not to mention the need for additional firewalls.

Linux is no more unhackable than other operating systems. You can however reduce its hackability with some simple precautions that unsurprisingly look like steps you would take for other systems.



Thursday, 22 February 2018 16:25

Linux the Unhackable? That All Depends ...

(TNS) - Local government officials from coastal communities battered by Hurricane Harvey voiced anxieties and frustrations about the recovery process - and the fact that hurricane season is only three months away - to a Texas House subcommittee Tuesday.

Just days before the six-month anniversary of the devastating hurricane, the Texas House Appropriations Subcommittee on Disaster Impact and Recovery met in Victoria, where mayors and county leaders shared lists of projects that need to be undertaken so communities will be protected from future storms.

Many of the government leaders who came from communities spanning from Fulton to Victoria said they didn't have a place for residents or first responders to take shelter.



Blockchain is the underlying distributed ledger technology for cryptocurrencies such as Bitcoin, and it has been at the forefront of business news for the last two years. Fortunes have been built and lost buying and selling cryptocurrencies. In one case, a gentleman threw away a CD containing his private keys, losing all access to his bitcoin portfolio; he petitioned the city to let him search the dump for the disc, which would give whoever finds it access to millions of dollars in bitcoin. There have been countless initial coin offerings promising to revolutionize business with underlying applications of blockchain technology. One organization created digital cats, called Cryptokitties, and a single, rare, digital cat can fetch close to $100,000. Breathless headlines and blog posts appear almost daily.

Clearly, the hype cycle is in full swing. Interestingly, though, many people have very little understanding of the capabilities and limitations of blockchain technology. Moreover, the hype cycle has caused business leaders to spend time investigating use cases that are not necessarily good fits for blockchain.



Sometimes there is a good reason to reinvent the wheel—for example, if you are in business and the current “wheel” is a proprietary product controlled by your competitor.

However, sometimes the tried-and-true solution is the best way to go, and we believe that is the case when it comes to emergency management systems.

An emergency management system is the methodology an organization uses for managing emergencies.

Having such a system is critical for the protection of your organization since if and when you do face an emergency, your problems can be made significantly worse if your response is hampered by role confusion and poor communication.

So you should definitely have an emergency management system in place—but what kind of system?



Crisis Management Teams Should Always Have a Toolkit That Supports Them During the Crisis.

One of the questions that I get asked most often is “what are some of the most common mistakes you see as you visit various clients?” Other than companies not being committed to exercising (another article, another time), my biggest concern is with companies that treat crisis management as something they would pull together in an ad-hoc fashion. They may have a Crisis Management Plan, but they don’t treat crisis management as a Program – and therefore aren’t developing, exercising, measuring and refining their tools. Ideally, your Crisis Management Team has these six tools in its tool belt.



Thursday, 22 February 2018 15:38

Every Crisis Team Should Have These Six Tools

How will your business respond if faced with a natural disaster, a cyberthreat or an active shooter scenario?

Will the organization stay afloat in the midst of such a crisis? Any amount of disruption costs your business money and can destroy customer relations. In fact, 75 percent of companies without a continuity plan fail within three years of facing a disaster. Companies unable to get back up and running within 10 days of an emergency do not survive at all.

A business continuity plan provides your company with the roadmap to navigate a major business disruption, including a natural disaster or large-scale emergency. However, having a plan in place is only the first step; the plan also needs to be continuously monitored and tested for gaps or obstacles.



Last year, major investments and advancements were made in communication technologies, both within the mobile space and the Internet of Things (IoT).

Additionally, we saw continued advancements in virtual reality and increased video conferencing. Unsurprisingly, social media platforms remain a viable contender in the way we communicate. As you consider how to improve your organization with better emergency notification and communication plans this year, take notice of how top trends can solve your biggest problems.



Wednesday, 21 February 2018 16:07

Emergency Management Trends in 2018

(TNS) — Four months after a ferocious firestorm devastated communities in California's wine country, those who lost their homes are still struggling.

Animal feeding stations remain on roadsides, monitored by volunteers searching for pets left behind when their owners fled. Cats that had been feared dead continue to be found.

Signs are everywhere, advertising the services of contractors, engineers, debris removers and lawyers. Many burned homes have yet to be cleared.

The shock and horror of the early days have given way to lingering grief and agony over whether to rebuild or move on. But the most perplexing and time-consuming matter for victims has been insurance.



How Effective and Efficient is Your Process?

In-house investigation teams are expected to be agile so they can help organizations address cases that come in through whistleblowing channels and sustain the organization’s ethical standards. By supporting your investigation teams with measures that assess effectiveness and efficiency, you can make those teams more agile and enable them to deliver better results.

The efforts of in-house investigation teams are integral to upholding organizations’ ethics and compliance. These teams must navigate through mazes of evidence to uncover violations and take appropriate disciplinary actions. With more cases, evolving case trends and increased expectations of senior management and boards of directors, investigation processes need to be dynamic, effective and efficient. However, unlike other processes, measuring effectiveness or efficiency in this process can be difficult.

Measuring effectiveness means assessing whether the efforts of the investigation team result in visible changes in the ethics environment or types of cases it sees. Measuring efficiency means assessing how long it takes to close the case and recover from fraud losses.
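As a rough illustration, the two efficiency measures can be computed from a simple case log; the data and field layout below are hypothetical:

```python
from datetime import date

# Hypothetical case log: (date opened, date closed, fraction of loss recovered)
cases = [
    (date(2018, 1, 3), date(2018, 2, 14), 0.80),
    (date(2018, 1, 10), date(2018, 1, 31), 1.00),
    (date(2018, 2, 1), date(2018, 3, 12), 0.25),
]

# Efficiency measure 1: average number of days to close a case
avg_days_to_close = sum((closed - opened).days for opened, closed, _ in cases) / len(cases)

# Efficiency measure 2: average fraction of fraud losses recovered
avg_recovery = sum(recovered for _, _, recovered in cases) / len(cases)

print(f"Average days to close: {avg_days_to_close:.1f}")  # 34.0
print(f"Average loss recovery: {avg_recovery:.0%}")       # 68%
```

Tracking these figures over time, rather than in isolation, is what reveals whether the process is becoming more efficient.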

Organizations can use these tips to evaluate the effectiveness and efficiency of their in-house investigation processes:



Wednesday, 21 February 2018 16:05

7 Tips To Evaluate In-House Investigations

The business world is changing. Not that we have to tell you that. The rise of cloud computing has brought with it a host of non-traditional options for how companies can structure their business operations.

Studies suggest that between 80 and 90 percent of the US workforce would like to work remotely at least part-time. In fact, 3.7 million US employees already do.

And that number is on the rise as companies shift to cloud technologies to decrease the overhead associated with physical locations and create better work-life flexibility for their employees.



Business continuity is good for your business, but is it also a legal requirement? Laws and regulations differ from one country or one industry to another, although there is a basic expectation that organisations will act responsibly.

Data integrity, security and availability are part of those expectations, implicitly or explicitly.

Due diligence is now a concept that extends beyond mergers and acquisitions. It also covers compliance with various standards of IT and data management. So, how might this affect your enterprise?

In Australia, regulations to be observed concerning business continuity and disaster recovery exist for specific sectors such as finance.

Austraclear, the organisation providing settlement services for the Australian Stock Exchange, specifies obligations for “participants” to put BCP in place.



Tuesday, 20 February 2018 15:44

Legal Requirements for Business Continuity

Key Concerns for Private Funds in 2018

With the public equity markets at an all-time high and private equity fundraising setting new records, it might seem counterintuitive to forecast litigation and regulatory risks. The opposite is true. Disputes typically follow capital, and the steeper the growth curve, the greater the risk of litigation and regulatory scrutiny. With that backdrop, we are pleased to present our Top 10 Regulatory and Litigation Risks for Private Funds in 2018.



Tuesday, 20 February 2018 15:42

The Top 10 Regulatory And Litigation Risks

(TNS) - The University of Iowa’s emergency preparedness — including its ability to handle bomb threats, health crises and hostage situations — has “unacceptable weaknesses” that expose the campus to “unacceptable risks,” a new audit reports.

The report, completed in October and made public last week, found problems with the UI’s emergency policies and plans, its training protocols, its communication strategies and its incident follow-ups.

“Incident and emergency exercise information is not documented fully, completed timely, distributed appropriately, or reviewed for possible improvements,” the report from the UI Office of Internal Audit determined. “The lack of appropriate distribution of emergency information results in delays for corrective actions.”



(TNS) - As flags were being lowered to half-staff after Wednesday’s Parkland, Florida, school shooting, school administrators here were fielding telephone calls from concerned parents.

“I’m fed up with school shootings,” said Carl Murphy, an Eastmont parent who called The Wenatchee World after talking to his child’s school principal. “I want to know why anyone can walk into a school and cause whatever harm they choose.”

Similar calls and emails from parents worried about school security in the wake of the shooting that killed 17 prompted both Wenatchee and Eastmont superintendents to post letters of assurance to community and staff members.



The world’s population is ageing, just like us. As we enter the era of “super-aged societies,” governments, communities and businesses need to adapt. A new ISO technical committee has just been formed to help.

In 2017, the number of people aged 60 years or over worldwide was more than twice what it was in 1980, and it is expected to double again by 2050, reaching nearly 2.1 billion. The changing demographics of our society bring with them pressures and challenges ranging from healthcare to the local bus. But opportunities, too, are rife. The recently formed ISO technical committee ISO/TC 314, Ageing societies, aims to develop standards and solutions across a wide range of areas, to tackle the challenges posed as well as harness the opportunities that ageing populations bring.

ISO/TC 314 Secretary Nele Zgavc from BSI, ISO’s member for the UK, said dementia, preventative care, ageing workforces, technologies and accessibility are just some of the areas of standardization that the committee proposes to work on. “Ageing societies have global implications,” she said. “Governments and service providers need to effectively cater to the needs of their populations as they age for the benefit of society as a whole. There is a crucial need for standards to support this in order to provide a high-quality level of service and harness the opportunities that ageing societies hold.”



(TNS) - Triangle blood donation centers that supply area hospitals are experiencing a drop in donors as a national flu epidemic is keeping people home.

The Blood Connection announced an urgent need for all blood types this week, saying the flu outbreak has cut blood inventories by at least 10 percent. The organization lost two days’ worth of blood from cancelled blood drives.

The American Red Cross Carolinas said bad weather forced the cancellation of 121 blood drives in January, resulting in the loss of about four days of blood collections. The organization is also seeing a lower donor turnout this month because of the flu.



Hurricanes, wildfires, earthquakes and floods strike communities every year, injuring and displacing thousands. A plan and an emergency kit are important, but they only go so far. Ideally, your whole community should be ready, and if you don’t think it is, here’s how you can help make sure.

Organizing your neighbors with a plan in case the worst happens is no simple feat. It’s difficult enough for most of us to plan for our own families, much less a dozen in our building or on our block. So how do you do it?

We spoke to Mitch Stripling, the assistant commissioner of Agency Preparedness and Response for the New York City Department of Health and Mental Hygiene and the co-host of “Dukes of Hazards: The Emergency Management Podcast,” about the best ways to get everyone in your area aware and prepared for the types of disasters that are most likely to impact your community.



Monday, 19 February 2018 15:21

How to Prepare Your Community for a Disaster

If I told you about something you could do that would swiftly vault your organization into the ranks of the elite, in terms of your business continuity management program, would you do it? Would you at least be interested in learning more about it?

There is such a step you can take, and it’s so easy, inexpensive, and helpful in terms of the direction it can give your BC program that I’m always amazed more companies don’t do it. In fact, based on the informal surveys I conduct when I speak at business continuity events around the country, I would say that fewer than 10 percent of organizations have implemented this measure.

What is the step I am talking about? Adopting a business continuity standard for your organization.

Now, when I say it is easy to adopt a standard I am not saying that coming into compliance with one is necessarily a piece of cake. Some standards are tougher than others to align with and some are very hard to meet indeed (here’s looking at you, FFIEC—and if you don’t know what I mean by “FFIEC” keep reading).



This is part 3 of a 3-part series on digital blueprints.

Digital transformation has tremendous potential to unleash value for organizations; thus, more and more organizations are formulating digital strategies. However, many are missing significant value and opportunities that are made possible by a holistic digital strategy. Many digital strategies are focused too narrowly. For example, leaders claim they are achieving the digital strategy by moving applications and infrastructure to the cloud. A digital strategy establishes the enterprise vision and priorities for digital transformation. To power your digital transformation, leverage a digital blueprint: a structured approach used to evaluate opportunity areas, value drivers, and risks, and to align the digital path with business drivers.



Making the Investment to Shift Risk Culture

Risk culture, though difficult to define, is one of the topics most mentioned by Fortune 500 executives and regulators across several industries. However, despite this visibility in quarterly calls, creating, measuring and influencing risk culture continues to defy easy answers for organizations. Yet – as Matt Shinkman and Chris Matlock detail – it is this very challenge that makes tackling risk culture in 2018 a strategic opportunity that pays dividends beyond compliance.

Over the past decade, organizations have made great strides in improving their risk management processes and systems. While this has generally helped senior leaders understand their biggest risk exposures, progressive organizations are now turning their attention to the need for a cultural shift where employees embed risk management in their day-to-day workflows. Our conversations with heads of enterprise risk management (ERM) at over 300 large, global organizations have surfaced a multitude of questions; yet the question, “How do I define and improve risk culture?” is one of the most common. Moreover, it’s a growing concern and interest among financial regulators globally. However, despite this heightened visibility, defining and influencing risk culture continues to defy easy answers for many organizations.

To start, there is no clear sense for what risk culture actually is or how to influence it. Discussions on risk culture sound similar to the parable about the blind men and the elephant, where each person touches a different part of the animal and makes their own judgments about what it is. As a result, we end up defining risk culture in simple terms: the deeply held assumptions, beliefs and values shared by an organization’s employees with respect to risk management.



How to Mitigate Risk and Liability

When allegations of misconduct are raised, leadership should quickly turn its attention to an internal investigation. Depending on the nature of the supposed wrongdoing, the matter may need to be investigated quickly. But a haphazard investigation won’t do. Jeffrey Klink offers seven steps to a successful investigation.

Businesses regularly confront allegations of internal misconduct. These allegations can involve breaches of the law or the business’s policies or procedures. Successfully navigating the potential pitfalls of internal investigations is essential to protect your brand and important assets, as well as to avoid the risk of having to deal with additional problems resulting from adverse media coverage. This article outlines seven steps that will assist corporate counsel, owners and others in managing and mitigating internal misconduct allegations.

Many professionals like to make internal investigations confusing. But the reality is that there are basic steps that can be taken to determine if a misconduct allegation has merit and if a comprehensive investigation is required. The first step is determining whether an allegation has merit; if it does, then some or all the next steps may be required. Step two is assigning a case supervisor and other professionals to conduct the investigation.  Steps three to seven are: (3) obtain and review all pertinent data and documents; (4) conduct discreet background research on significant parties and subject(s); (5) interview knowledgeable persons; (6) interview subject(s); and (7) assess which internal controls and procedures can be improved to avoid future problems.



What’s that old saying? “Build a better mousetrap, and the world will beat a path to your door?”

The phrase is credited to Ralph Waldo Emerson, although the exact wording is up for debate. Regardless, the sentiment has caught on over the past 150 years, and today more than 4,400 patents have been issued by the U.S. Patent Office for original mousetraps. A “better mousetrap” is a metaphor for any innovation that solves a problem – Scotch tape, revolving doors, Velcro. Identify a problem, observe the world around you, and experiment (repeatedly) until you find a solution.

“Cloud computing” has been ripe for innovation over the last 12 years, since the term was added to the mainstream vernacular when Amazon.com released its Elastic Compute Cloud product in 2006. Various flavors of cloud have been introduced (public, private, hosted private, hybrid, etc.), and today, there are so many companies offering “cloud services” that Forbes publishes a Forbes Cloud 100 list of new vendors each year.



Damage to reputation or brand, cyber crime, political risk and terrorism are some of the risks that private and public organizations of all types and sizes around the world must face with increasing frequency. The latest version of ISO 31000 has just been unveiled to help manage the uncertainty.

Risk enters every decision in life, but clearly some decisions need a structured approach. For example, a senior executive or government official may need to make risk judgements associated with very complex situations. Dealing with risk is part of governance and leadership, and is fundamental to how an organization is managed at all levels.

Yesterday’s risk management practices are no longer adequate to deal with today’s threats and they need to evolve. These considerations were at the heart of the revision of ISO 31000, Risk management – Guidelines, whose latest version has just been published. ISO 31000:2018 delivers a clearer, shorter and more concise guide that will help organizations use risk management principles to improve planning and make better decisions. Following are the main changes since the previous edition:



Thursday, 15 February 2018 15:54

The new ISO 31000 keeps risk management simple

It’s the rare workplace disaster that is dramatic enough to burst into the news. Most business emergencies are managed privately, with the outside world unaware that anything was ever wrong.

This is great from the companies’ point of view. It’s bad enough having a disaster without having a public-relations distraction piled on top of it.

However, there is a drawback for the business-continuity community: All this discretion deprives people outside the firm of the benefit of their peers’ experience.

One interesting consequence of this is that many BC professionals have a limited concept of the kind of negative events that can impact their organizations. Everyone knows about fires, hurricanes, and cyberattacks—they’re in the news all the time—but many other kinds of things can disrupt a business and frequently do, including things most people couldn’t even imagine.



While cyber security may have you thinking in zeros and ones, and wondering which next generation firewall you should buy next, the human element is alive and well in cyber crime.

Indeed, it can be argued that cyber crime only exists because human beings are motivated to take or break digital assets that do not belong to them.

So, while you mull over your cyber security defence, it may be helpful to consider how criminologists view the matter, especially in terms of crime displacement, a natural result of any security strategy.

Basically, the idea behind crime displacement is that if you stop criminals (including cyber criminals) from perpetrating crime in one way, they may well look for another way. Professional hackers including teams working for governments are more likely to develop other lines of attack.



Thursday, 15 February 2018 15:52

Cyber Security and Pointers from Criminology

(TNS) - An American nightmare unfolded Wednesday afternoon at a North Broward high school when a former student came onto campus and opened fire, killing and injuring multiple people.

Details remain cloudy amid a flurry of police activity at Marjory Stoneman Douglas High School in Parkland off the Sawgrass Expressway. Students, who heard a fire alarm go off just before dismissal, followed by gunshots, fled campus and hid under desks as police sped to the scene. Parents, blocked from getting onto campus, stood by helpless.

The Broward Sheriff’s Office is reporting “at least 14 victims.”

The shooter, a former student identified by law enforcement sources as Nicolas de Jesus Cruz, managed to make it off campus. He was cornered and taken into custody in a townhouse at Pelican Pointe at Wyndham Lakes in Coral Springs.



Looking back on the past decade, few would dispute that certain man-made threats – active shooters, cybercrime, and workplace violence – are on the rise.

What are the facts behind these incidents?  And is your organization prepared to respond should they occur at your workplace?  Let’s take a closer look.



What Compliance Should Be Doing Now

CCI has covered the General Data Protection Regulation (GDPR) extensively, and by now most readers may know that the deadline for GDPR compliance is barreling toward us. Kevin Gibson walks us through what businesses must do to prepare.

May 25, 2018, the day on which the General Data Protection Regulation (GDPR) takes effect, is fast approaching. Some firms have been proactively working toward GDPR compliance, which is wise given that failure to do so exposes organizations to fines of up to €20 million (US $23.5 million) or 4 percent of global revenue — whichever is higher. However, it appears that a majority of firms whose business requires them to comply with GDPR have yet to do so and are instead waiting to take action until just before the deadline or worse, after it passes. Such procrastination is ill advised. The GDPR compliance countdown, as outlined here, should start now.
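The fine ceiling mentioned above is the greater of €20 million or 4 percent of global revenue, which can be sketched as a one-line calculation (the figures below are illustrative only):

```python
def gdpr_max_fine(global_revenue_eur: float) -> float:
    """Upper bound on a GDPR administrative fine: the greater of
    EUR 20 million or 4 percent of global annual revenue."""
    return max(20e6, 0.04 * global_revenue_eur)

# EUR 300M revenue: 4% is EUR 12M, below the floor, so EUR 20M applies.
print(gdpr_max_fine(300e6))  # 20000000.0

# EUR 2B revenue: 4% is EUR 80M, which exceeds the EUR 20M floor.
print(gdpr_max_fine(2e9))    # 80000000.0
```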



Wednesday, 14 February 2018 15:18

Countdown To The GDPR

(TNS) - Missouri lawmakers are looking at a bill that would authorize college faculty members to carry a gun on campus.

Republican Rep. Dean Dohrman introduced a bill Wednesday to the House Higher Education Committee that would designate full-time faculty as “Campus Protection Officers,” who can carry concealed weapons.

Those in support believe it will improve college campus safety whereas those who oppose the bill argue it will not.

“I understand that a lot of people feel unsafe in public settings,” said Missouri Democratic Rep. Greg Razer. “My bigger question is ‘What does it mean as a society that we are at a point where we are having to hear a bill in the general assembly that would allow universities to have a designated shooter in their classrooms?’”

Supporters of the bill argue that a designated shooter could help stop a possible shooting.



Wednesday, 14 February 2018 15:17

Bill Would Allow University Faculty to Carry Guns

4 Best Practices to Protect Your Business

It’s been weeks since the Meltdown and Spectre vulnerabilities took the security world by storm, yet we’re still living in a state of chaos and confusion. The best “fix” for these bugs is still forthcoming, and patches should be implemented once they’re available. Michael Lines offers guidance to help you master the art of patching.

By now, you probably know that Meltdown and Spectre exploit critical vulnerabilities in modern processors, allowing malicious programs to steal data that is being processed on a computer. The unforeseen consequences of these hardware design flaws leave us facing a problem unlike anything we’ve ever seen, both in scope and scale (billions of desktops, laptops, smartphones and cloud computing platforms are affected). As a result, hardware and software vendors and researchers are still trying to determine the best “fix” for these bugs, and companies are still struggling to understand the scope of the issue, their vulnerability level and what they can do about it.

Early announcements to replace the impacted CPU chips have rightfully been supplemented with more practical advice to apply appropriate patches as they are released. This, in and of itself, is a complicated process, as patches will need to be applied across a vast array of operating systems, and many of these patches are still to be developed and released.

But there’s no need to panic. Here are several best practices to help you master the patching process.
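One such practice is simply maintaining an asset inventory with patch status, so remediation progress can be measured and prioritized. A minimal sketch follows; the asset names and fields are hypothetical:

```python
# Hypothetical asset inventory with Meltdown/Spectre patch status.
assets = {
    "web-01":  {"os": "linux",   "patched": True},
    "db-01":   {"os": "linux",   "patched": False},
    "hr-lt-7": {"os": "windows", "patched": False},
}

def unpatched(inventory):
    """Return the assets still awaiting a vendor patch, sorted by name,
    so remediation work can be prioritized and progress reported."""
    return sorted(name for name, attrs in inventory.items() if not attrs["patched"])

print(unpatched(assets))  # ['db-01', 'hr-lt-7']
```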



Tuesday, 13 February 2018 16:45

Lessons Learned From Meltdown And Spectre

Along with Increases in Productivity and Reach Come Real Risks

New collaboration platforms such as Slack and Facebook Workplace are rapidly gaining traction in businesses large and small, especially among younger workers and millennials who gravitate to these tools because they are much more immediate and efficient than trading emails or voice messages. However, such gains in workforce productivity can come with unforeseen compliance risks, especially among heavily regulated financial companies. In this latest piece, Mike Pagani shares how to protect these new communication platforms, which are subject to the same regulatory and compliance requirements as other forms of electronic communications, such as emails and texts.

It feels like it was not that long ago that emails drove the reactive part of our business days, reading and responding to the urgent ones as they popped into our inboxes. Not so, lately.

If your business environment is like most these days, a growing number of your employees are spending more cumulative time in Slack (or a different collaboration platform) on a daily, and sometimes nightly, basis for team-related internal communications instead of using email. If you are a private sector business, your marketing department is also using social media channels and apps to reach new customers and communicate with existing ones far more than it relies on email marketing.



If everything is working and you have a business continuity plan in place, is there anything left to worry about? Yes!

Near misses that do not result in interruption can still be critical warning signs that your business continuity is too fragile.

ISO 22301, the international standard for business continuity, specifies that “organizations must determine what to measure and monitor for BC… and take action when necessary to address adverse trends or results before a non-conformity occurs”.

So, how do you set about identifying and acting on near misses, which in a way are non-events, making them even more challenging to spot?
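One simple approach, sketched below, is to log near misses per period and flag a sustained rise before it becomes a non-conformity; the trend rule and figures here are assumptions for illustration, not part of ISO 22301:

```python
# Hypothetical monthly near-miss counts for one business process.
near_misses = {"Oct": 1, "Nov": 2, "Dec": 3, "Jan": 4, "Feb": 6}

def adverse_trend(counts, window=3):
    """Flag an adverse trend when near-miss counts have risen in each of
    the last `window` month-over-month comparisons (an assumed rule)."""
    recent = list(counts.values())[-(window + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

print(adverse_trend(near_misses))  # True: counts rose three months running
```

A flagged trend would then trigger the corrective action the standard calls for, before an actual interruption occurs.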



Sure, this isn’t your typical “wanted” ad, but wouldn’t it be great to be in love with your technology for once instead of constantly fighting with it?

A mass notification vendor should be devoted to ensuring that notification-related tasks are quick, easy, and painless. Relationships are a two-way street, so two-way notifications from your vendor are another must. Love is in the air, so why not fall in love with a new emergency notification system this year?

The Traits You Need

When it comes to emergency communication, the worst-case scenario is a missed connection – one party attempting to provide crucial information and not being able to reach the other party. You are responsible for communicating with a group of constituents, and the message simply must get through. A good system provides you with a variety of ways to reach others — without a lot of fuss and drama. Here are some key attributes you should be looking for:



Business Continuity Management (BCM) is vital in preparing and protecting business operations from disruptions caused by threats stemming from cyber-attack and natural disasters, as well as resource unavailability such as building loss, technology loss, staff absenteeism, and supply chain failure. A robust business continuity programme manages the likelihood and impact stemming from disruptive incidents through proactive response and recovery planning, with the objective of reducing operational downtime.

As a consultant and former BCM practitioner, I am regularly asked by senior executives, “What are the most essential aspects to focus on when launching a successful BCM Programme?” This article discusses 9 key steps to follow for success.



#1: What Is the Difference Between SOC 1, 2 and 3?

Service Organization Control (SOC) is a compliance framework with three types of certification, aptly named SOC 1, SOC 2 and SOC 3.

SOC 1 is primarily meant for banks, investment firms and other such companies that house financial data, while SOC 2 is for non-financial companies that house or process data, which may happen to be financial or otherwise. It’s this latter certification that software and cloud providers often use to verify their technology controls and processes. Auditors for the SOC frameworks check for security, availability and data protection, using the American Institute of CPAs (AICPA) Trust Services Principles as their standards baseline.

SOC 3 stands apart from the other certifications because it doesn’t focus on validating controls and operations. It’s intended for more general-purpose disclosures and public visibility (as SOC 3 reports don’t typically include confidential information), auditing organizations under the SysTrust and WebTrust seal programs. This certification is usually ideal for organizations that simply want to market a product against marketplace standards.



Some things are hard to predict. And others are unlikely. In business, as in life, both can happen at the same time, catching us off guard. The consequences can cause major disruption, which makes proper planning, through business continuity management, an essential tool for businesses that want to go the distance.

The Millennium brought two nice examples, both of the unpredictable and the improbable. For a start, it was a century leap year. This was entirely predictable (it occurs any time the year is cleanly divisible by 400). But it’s also very unlikely, from a probability perspective: in fact, it’s only happened once before (in 1600, less than 20 years after the Gregorian calendar was introduced).
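The century-leap-year rule the paragraph describes can be written down in a few lines. This is a minimal sketch of the standard Gregorian algorithm, confirming that 1600 and 2000 are the only century leap years since the calendar was introduced:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: leap if divisible by 4, except
    century years, which must be divisible by 400."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# Century leap years since the Gregorian calendar began (1582):
century_leaps = [y for y in range(1582, 2001)
                 if y % 100 == 0 and is_leap_year(y)]
print(century_leaps)  # [1600, 2000]
```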

A much less predictable event in 2000 happened in a second-hand bookstore in the far north of rural England. When the owner of Barter Books discovered an obscure war-time public-information poster, it triggered a global phenomenon. Although it took more than a decade to peak, just five words spawned one of the most copied cultural memes ever: Keep Calm and Carry On.



In last week’s blog, we talked about the importance of identifying the right BIA impact categories for your business impact analysis.

As a reminder, these are the six categories appropriate to your industry and organization that your management chooses to measure to assess the impact to the organization of disruptions to its operations.

Now that you have selected the right impact categories for your industry and company, you’re ready to determine the weighting you will use for each category.

In today’s blog we’ll talk about what this means, why it’s important, and how to go about doing it.



Friday, 09 February 2018 15:26

How to Weight Your BIA Impact Categories

(TNS) - Religious leaders and business owners on Monday were forced to confront an otherwise unimaginable possibility: What would you do if a gunman opened fire on a crowd here in Delray Beach?  

It isn’t easy to think about, but an all too common reality, said Delray Beach Police Chief Jeff Goldman, whose department offered active-shooter training to locals. 

“Someone asked me the other day, ‘Why are you doing this?’ And I had that deer-in-the-headlight look,” Goldman said. “If you haven’t been watching television, and you haven’t been seeing what’s going on around the world … it’s sad. It’s just sad.”



VW’s now infamous dieselgate crisis is far from over, and additional punishments and damages still wait in the wings.

But after months of PR efforts to regain the trust of customers, regulators and governments, more bad news for the company was revealed in late January.

VW had also been sponsoring laboratory tests that forced lab monkeys into locked chambers to breathe diesel exhaust from a VW Beetle rigged with the sophisticated dieselgate technology — a scam that made the car’s polluting emissions appear far less in a laboratory test than they actually would be on the road. Similar experiments were also performed on humans. VW’s goal was to support its claims that diesel exhausts on new diesel cars were at safe levels in order to advance diesel-friendly public policies in multiple countries. In 2012 the World Health Organization had classified diesel exhaust as a carcinogen.



With the aim of IT service management being to serve the business or the organisation funding the IT, it’s crucial that business requirements drive ITSM projects and procurement.

The tool used for this is often the Statement of Work (SOW), which lays out what is wanted and what is planned.

The challenge comes in making sure that these components are properly linked, and they also relate back to the originating business need.

Formats and contents of a statement of work vary, but you can expect to find sections like these:



Thursday, 08 February 2018 18:36

ITSM and Statement of Work

There are, to be sure, both pros and cons to software defined storage. Among its benefits, SDS can bring flexibility, openness, and lower costs. However, software defined storage's distributed nature can be challenging for new operators, and complexity can creep in.

Pros and Cons of SDS: the Pros

There is no doubt that software defined storage (SDS) has gained serious traction in recent years. Since its introduction less than a decade ago it has steadily overcome technical hurdles and grown in market share. Its attractiveness, after all, stems from the promise it holds.

“One of the big benefits of software defined storage is the flexibility it offers, including the ability to configure and deploy storage systems your way, such as the type of hardware, or on virtual, container or cloud platforms,” said Greg Schulz, Senior Advisory Analyst, Server StorageIO, and author of "Software Defined Data Infrastructure Essentials."



Thursday, 08 February 2018 18:36

Software Defined Storage: Pros and Cons

The “six degrees” concept is that you can reach any person in the world using a maximum of six personal relationships in a chain stretching from you to the person you want to reach.

There has been a fair amount of cyber hype over the last 20 years or so, although the idea dates back to 1929.

A Hungarian named Frigyes Karinthy was fascinated by the idea of the world shrinking as connectivity grew and was the first to postulate the theory.

In business continuity, however, the number of degrees has been rising as offshoring and multi-company supply chains have increased.

So, how many degrees or links back must you check for possible business continuity impact on your business from the failure of another enterprise elsewhere?



Wednesday, 07 February 2018 16:10

The “Six Degrees” of Business Continuity

The concept of "software defined storage" (SDS) seems to be everywhere, if you read about the data storage industry or explore storage products.  However, the software defined storage definition remains a little vague – or perhaps more than a little vague.

Most people seem to agree that at first, SDS was little more than a marketing buzzword. It first came into vogue after the OpenFlow project introduced the idea of software defined networking (SDN) around 2011. As vendors like VMware began to embrace the idea of the software defined data center (SDDC), storage vendors saw an opportunity to gain traction for their products with the "software defined storage" label.

But while SDS may have originated as basically a marketing gimmick, the technology that underlies it truly is different from traditional storage hardware. More importantly, enterprises have come to realize that SDS offers substantial benefits over traditional SAN and NAS arrays.



Wednesday, 07 February 2018 16:09

What is Software Defined Storage?

What business continuity or disaster recovery exercises have you performed? Do you know the difference and have distinct goals for them? Do you even do exercises? We receive many questions on this topic, to the point where we thought it might be helpful to devote today’s post to a “Beginner’s Guide to Recovery Exercises.”

In this overview, we’re going to provide some introductory information on this essential topic. Specifically, we’re going to answer the following questions:



Wednesday, 07 February 2018 16:09

Beginner’s Guide to Recovery Exercises

On our latest episode of The Watchdog, our hosts sat down to talk with Michael Gonzalez, Operation Iraqi Freedom veteran and senior physical IT systems administrator at a large utilities company in Hawaii, to talk about his experience during the recent Hawaii false nuclear missile alert. We have the full interview for you here:

Walk us through what those first few minutes after the alarm sounded were like for you. As far as I understand, you were still in bed, weren’t you?

Yeah, it was about 8:03 in the morning when I got the alert on my iPhone. My girlfriend got it at the same time, so we had our phones sitting on opposite bedside tables and we both woke up with a start, thinking it was probably a flash-flood. We looked outside and saw that it was sunny and not raining, so I grabbed my phone. I looked at it and read the message: “Emergency alert, ballistic missile threat inbound to Hawaii. Seek immediate shelter. This is not a drill.” My girlfriend and I both looked at each other, kind of in disbelief, and we got up and walked to the living room. I turned on the television and everything was normal. Business as usual, which kind of threw me off. Next thing I did was track down my kids.



(TNS) - It was supposed to be just another nor'easter, a little storm by New England standards, 10-12 inches at best, but what started on Monday, Feb. 6, 1978 turned into the stuff that legends are made of, and it remains, 40 years later, the storm by which all other storms are measured.

"You have to remember, in those days, forecasting was not what it is today," said Michael Dukakis, who was a first term governor in 1978. "It's improved enormously since then."

For those of you too young to remember, the day started out mild, but before nightfall, a weak storm that had formed off the coast of South Carolina ran smack up against an arctic cold front and a weirdly strong band of high pressure over Canada and turned into a kind of perfect winter storm that stalled over New England for almost 36 hours.



Tuesday, 06 February 2018 17:00

Recalling the Storm That Rocked New England

(TNS) — It is easy to forget that slightly more than 100 years ago some of the strongest earthquakes ever recorded rocked Illinois and Missouri, but the Illinois Emergency Management Agency will be reminding people throughout the month of February.

The Metro East is situated near two fault lines — including the New Madrid Fault, which produced the largest earthquakes in the continental U.S. in 1811-1812.

“In addition to the New Madrid Seismic Zone, where the 1811-12 quakes occurred, southern Illinois is also adjacent to the Wabash Valley Seismic Zone,” said IEMA Interim Director Jennifer Ricker. “We can’t predict when the next devastating earthquake in this region will happen, but we can help people learn how to protect themselves and reduce damage to their homes.”



(TNS) — It's not too late to get a flu shot.

That's the message New Yorkers need to know from the New York City Emergency Management Department's latest podcast series.

Dr. Demetre Daskalakis, deputy commissioner for the Division of Disease Control at the city Department of Health and Mental Hygiene, was the featured guest on the latest episode of "Prep Talk."

"Prep Talk" engages listeners on emergency management topics, as hosts talk to guests about keeping New York City safe and prepared before, during and after emergencies.

Dr. Daskalakis discussed the flu vaccine and debunked myths about this season's flu virus. The podcast identified who is at risk to contract the flu and whether you can get sick from the flu shot.



According to the Harvard Business Review, cybersecurity breaches are the biggest internal threat to your company.

Is your business in financial services, manufacturing, or the healthcare industry? In that case, you want to pay particular attention to this article because these are the three industries most likely to be under attack. Here we have detailed a step-by-step process you can implement for your employees to protect against internal cyber threats. Personalize this information to develop the employee awareness program that best serves your industry and needs.



When IFRS 16 comes into effect in January 2019, it will transform the relationship between businesses and their leases, including those for office spaces and other real estate. Here, award-winning financial journalist Melanie Wright explains what the changes mean and why it’s so important for businesses to ensure they’re prepared.

Many firms lease a wide range of items to support their businesses, such as office space or vehicles. The latest standard from the International Financial Reporting Standards (IFRS), IFRS 16, is due to come into effect in January 2019, changing how businesses must recognise, measure, present and disclose these leases.



Tuesday, 06 February 2018 16:42

IFRS 16: five things you need to know now

Combined Approach Integrates Technology, Services and Enhanced Cyber Insurance to Make Businesses More Resilient


SAN JOSE, Calif. – Cisco, Apple, Aon and Allianz today announced a new cyber risk management solution for businesses, comprised of cyber resilience evaluation services from Aon, the most secure technology from Cisco and Apple, and options for enhanced cyber insurance coverage from Allianz. The new solution is designed to help a wider range of organizations better manage and protect themselves from cyber risk associated with ransomware and other malware-related threats, which are the most common threats faced by organizations today.

Cyber security risk is growing. Losses from cyber threats are outpacing investment in IT security. This fact, combined with low adoption of cyber insurance, an active adversary, a fragmented security technology market and a security skills shortage, means it is difficult for many organizations to understand and manage this risk effectively.

The new solution covers the primary dimensions of cyber protection for businesses. The key elements of the offering include:

  •   Cyber Resilience Evaluation: Aon cyber security professionals will assess interested customers’ cyber security posture and recommend ways to help improve their cyber security defenses.

  •   Cyber Insurance: Allianz evaluated the Cisco and Apple technical foundation of the solution and determined that customers using Cisco Ransomware Defense and/or qualified Apple products can be eligible for the Allianz-developed enhanced cyber insurance offering, acknowledging the superior level of security afforded to businesses by Cisco and Apple technology. This, in combination with individual risk insights gained through the Cyber Resilience Evaluation, makes the enhanced cyber insurance coverage available to Cisco and Apple business customers. Enhancements include market-leading policy coverage terms and conditions, including potentially qualifying for lower, or even no, deductibles in certain cases. The cyber insurance coverage is underwritten by Allianz Global Corporate & Specialty (AGCS).

  • Cisco Ransomware Defense is part of Cisco’s integrated security portfolio that leverages industry leading threat intelligence from Cisco Talos to see threats once, and block them everywhere. The solution includes advanced email security, next-generation endpoint protection and cloud-delivered malicious internet site blocking, to strengthen an organization’s defenses against malware, ransomware and other cyber threats. 
  • Apple products: iPhone, iPad and Mac give employees the best experiences at work with the strong security that businesses need. The tight integration of hardware, software and services on iOS devices ensures that each component of the system is trusted, from initial boot-up to installing third-party apps. Users benefit from always-on hardware encryption, as well as support for secure networking protocols like Transport Layer Security (TLS) and VPN out of the box.

  •   Incident Response Services: Organizations will have access to Cisco and Aon’s Incident Response teams in the event of a malware attack.

The new solution is available today. For further information visit https://cisco.com/go/cyberinsurance

Supporting quotes

“At Cisco, security is foundational to everything we do. As the leading enterprise security company, we know that in a digital world security must come first, and our integrated security architecture reduces customers’ overall risk of exposure to ransomware and malware attacks,” said Chuck Robbins, Chairman and CEO, Cisco. “Cisco Security technology is central to the new holistic risk management solution and we are excited to bring another important benefit to our customers with greater options for cyber insurance.”

“The choice of technology providers plays a critical role in any company’s defense against cyber attacks. That’s why, from the beginning, Apple has built products from the ground up with security in mind, and one of the many reasons why businesses around the world are choosing our products to power their enterprise," said Tim Cook, Apple’s CEO. “iPhone, iPad and Mac are the best tools for work, offering the world’s best user experience and the strongest security. We’re thrilled that insurance industry leaders recognize that Apple products provide superior cyber protection, and that we have the opportunity to help make enhanced cyber insurance more accessible to our customers.”

“Ransomware is an evolving risk that impacts every level of an enterprise. Organizations urgently need to be managing these risks from both the technical and the financial perspective,” said Jason Hogg, CEO, Aon Cyber Solutions. “This holistic solution provides our clients with an integrated approach to addressing ransomware risk. We can provide customers with guidance on what cyber defenses, resources and processes to deploy to improve their cyber posture. It’s the improved cyber posture that makes them eligible for enhanced/broader cyber insurance protection.”

“Proactive analysis coupled with the latest technology creates an ideal defense against today’s ever-changing ransomware and malware attacks,” said Bill Scaldaferri, President & CEO, AGCS North America. “This strategic alliance with Aon, Apple and Cisco allows us to provide a unique solution to companies using this integrated platform to manage risk and ultimately strengthen their battle against high-profile threats.”

About Cisco
Cisco (NASDAQ:CSCO) is the worldwide technology leader that has been making the Internet work since 1984. Our people, products, and partners help society securely connect and seize tomorrow’s digital opportunity today. Discover more at newsroom.cisco.com and follow us on Twitter at @Cisco.

About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, Apple Watch and Apple TV. Apple’s four software platforms — iOS, macOS, watchOS and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay and iCloud. Apple’s more than 100,000 employees are dedicated to making the best products on earth, and to leaving the world better than we found it.

About Aon
Aon plc (NYSE:AON) is a leading global professional services firm providing a broad range of risk, retirement and health solutions. Our 50,000 colleagues in 120 countries empower results for clients by using proprietary data and analytics to deliver insights that reduce volatility and improve performance.

Follow Aon on Twitter: https://twitter.com/Aon_plc
Sign up for News Alerts: http://aon.mediaroom.com/index.php?s=58

About Allianz Global Corporate & Specialty
Allianz Global Corporate & Specialty (AGCS), part of the Allianz Group, is dedicated to global corporate and specialty insurance business. AGCS underwrites insurance and provides risk consultancy across the whole spectrum of specialty, alternative risk transfer and corporate business: Marine, Aviation (incl. Space), Energy, Engineering, Entertainment, Financial Lines (incl. D&O), Liability, Mid-Corporate and Property insurance (incl. International Insurance Programs).

Worldwide, AGCS operates in 32 countries with its own units and in over 210 countries and territories through the Allianz Group network and partners. In 2016, it employed around 5,000 people and provided insurance solutions to more than three quarters of the 'Fortune Global 500' companies, writing a total of €7.6 billion gross premium worldwide.

AGCS SE is rated AA by Standard & Poor’s and A+ by A.M. Best.


Legacy VM backup solutions do not offer the features and functionality that organizations need in today’s hybrid cloud environments. In addition to the lack of features, the administrative overhead associated with the outdated technologies of legacy backup solutions prevents organizations from operating their backup infrastructure effectively and efficiently. Modern native backup solutions, such as NAKIVO Backup & Replication, provide the feature set that allows organizations to be agile, automated, and cloud-ready.


What Are Legacy Backup Solutions?

Legacy backup solutions are essentially the same backup solutions that have been used on physical machines for years and then moved over to virtual machines. The same technologies, tools, and methodologies are used for virtual machines as for physical machines, with minor enhancements. Typically, a legacy VM backup solution requires installing a “backup agent” inside the guest operating system for the backup solution to be able to back up virtual machines. Even if a legacy solution is able to use the native data protection APIs, it generally still needs some sort of agent to perform application-aware backups or granular restores.

Native VM Backup

Native VM backup solutions integrate seamlessly with a virtual infrastructure, are agentless, and take advantage of the powerful built-in API driven interaction provided by today’s virtual infrastructure in addition to many other benefits. In view of the challenges with legacy VM backup solutions mentioned above, let’s take a look at some of the features of NAKIVO Backup & Replication as an example of a native VM backup solution and find out how the product outperforms legacy VM backup solutions in key areas.

Top Reasons to Switch

• Reduce maintenance costs by 50% or more
• Eliminate agents and issues that come with them
• Reduce backup administration time
• Ensure guaranteed recovery with automated backup testing
• Perform VM backup and replication with a single solution
• Instantly recover VMs, files, and application objects
• Stay up to date with industry releases
• Extend data protection to the cloud
• Reduce backup solution footprint

The Reasons to Switch from Legacy Backup

1. 50% of the Cost

Legacy backup solutions are expensive to purchase and maintain. NAKIVO Backup & Replication is not. With trade-in pricing starting from $149/socket, NAKIVO Backup & Replication can reduce your maintenance costs by 50% or more while improving VM data protection and recovery.

2. Better VM Backup

Built for virtualization, NAKIVO Backup & Replication is a fast and reliable VM backup solution for protecting VMware, Hyper-V, and AWS EC2 environments. NAKIVO Backup & Replication offers advanced features that increase backup performance, improve reliability, reduce administration time, speed up recovery, and, as a result, help save time and money.

3. 97.3% Support Satisfaction

At NAKIVO, we regularly survey customers to identify what new features should be added to the product, what issues should be fixed, and how we perform in various areas, including technical support. Survey after survey, customers report their satisfaction with NAKIVO technical support, which reaches 97.3%.

4. NAS-based Backup Appliance

NAKIVO Backup & Replication can be installed on Windows and Linux, or deployed as a pre-configured VMware virtual appliance or AWS AMI. In addition, you can install NAKIVO Backup & Replication on an ASUSTOR, QNAP, Synology, or Western Digital NAS to create a reliable and cost-effective VM backup appliance that combines backup hardware, storage, software, and data deduplication in a single device. By installing on a NAS, you can offload your VMware or Hyper-V infrastructure backup workloads, separate data protection from the virtualized environment, and boost VM backup performance by up to 2X. This is because the performance of backup software installed in a VM is constrained by the overhead of network protocols such as NFS and CIFS and the available bandwidth between backup software and backup storage; a NAS has no such constraints. In addition, the price of such a setup can be 5X lower than the price of dedicated backup appliances.

5. Easy to Manage

Legacy backup solutions were developed in times when “functionality first, usability never (maybe later)” was mainstream, and this approach is in the very DNA of such tools. As a result, legacy backup products are overly complex, have User Guides 1,000+ pages long, and require regular maintenance or even professional services to run. At NAKIVO, usability gets the same high level of attention as functionality.

This is why NAKIVO Backup & Replication is praised by customers for simplicity and ease of use. “NAKIVO Backup & Replication is an outstanding product that offers great features and does not break the budget. The product saved us 35% of our management time,” said Stivan Chou of China Airlines.

6. Agentless VM Backup

Legacy backup solutions require agents to be installed in VMs in order to perform application-aware backup, file recovery, and recovery of application objects. Some legacy backup tools even require VMs to be rebooted after the agents are deployed on them.

There are several reasons agents are bad. More elements involved in creating a backup increase the probability that something will go wrong with a VM backup process. The higher chance of failure means more time spent on troubleshooting. Most administrators who have managed guest OS agents can attest to the headaches that result. The administrative overhead that comes with maintaining agents, installations, upgrades, etc., can be a nightmare. Moreover, agents put a higher load on the guest OS, which can lead to resource congestion at the time of backup and affect the performance of VMs. NAKIVO Backup & Replication does not require any agents to be installed within the operating system.

NAKIVO Backup & Replication integrates seamlessly with your virtual infrastructure and interacts with it without having additional software installed within the guest operating system. Even without agents, NAKIVO Backup & Replication is able to do all the important tasks that one would expect from a modern backup solution, including application-aware backups with log truncation, instant recovery, backup verification, etc.

7. Recovery Verification

Legacy backup solutions do not provide backup verification, leaving you vulnerable to potential data corruption and restore failures. NAKIVO Backup & Replication provides an automated way to near-instantly verify VM backups. After a VM backup is completed, the product can instantly recover the VM, wait until the OS has booted, take a screenshot of the OS, discard the test-recovered VM, and send you a report with the screenshot via email. This way you have proof that backups are good and VMs can be recovered.
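The verification workflow described above can be pictured as a simple recover-boot-check-discard loop. The sketch below is purely illustrative: every function in it is a stand-in for what a backup product would do internally, not a NAKIVO API.

```python
# Self-contained sketch of automated backup verification.
# All functions here are illustrative stand-ins, not NAKIVO APIs.

def recover_vm(backup_id):
    # Stand-in: "instantly recover" the VM from backup into an
    # isolated test environment.
    return {"id": backup_id, "state": "booting"}

def wait_for_boot(vm, timeout=300):
    # Stand-in: poll the hypervisor until the guest OS is up (or time out).
    vm["state"] = "running"
    return vm["state"] == "running"

def verify_backup(backup_id):
    vm = recover_vm(backup_id)
    try:
        if not wait_for_boot(vm):
            return False
        # A real product would capture a guest-OS screenshot here
        # and email it as proof that the backup boots.
        return True
    finally:
        vm["state"] = "discarded"   # the test VM is never kept

print(verify_backup("vm-042"))  # True
```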

8. Backup Copy

VM backups can be damaged, get accidentally deleted, or become unavailable. To safeguard your valuable data, NAKIVO Backup & Replication provides Backup Copy jobs, which offer a simple and powerful way to create and maintain copies of your VM backups. You can configure Backup Copy jobs to suit your needs: run Backup Copy jobs on their own schedule, send backup copies offsite or to Azure/AWS clouds, maintain a mirrored copy of a backup repository or specify which backups get copied, when, and how. Legacy backup solutions do not provide such features.

9. Cloud-aware Solution

Legacy VM backup solutions were engineered before cloud integration became an important business consideration. In contrast, NAKIVO Backup & Replication is fully cloud-aware and allows for seamless integration between on-premises and public cloud backup infrastructure resources, allowing organizations to take advantage of today’s hybrid cloud capabilities. You can back up VMs directly to AWS and Azure clouds, or set up scheduled backup copy jobs that would send local VM backups to AWS or Azure clouds. Moreover, NAKIVO Backup & Replication can be launched in AWS as a pre-configured AMI to provide backup and replication for AWS EC2 instances.

10. Full Synthetic Data Storage

Most legacy backup solutions must perform periodic full backups, transferring the whole data set to the target datastore over and over again. This results in slow backups and an extra load on the environment. Even though some legacy backup solutions can create a synthetic backup out of available increments, they still require transforming recovery points in the backup repository into “synthetic full backups”, which does not resolve the issue, but simply shifts the load from the source to the target storage. NAKIVO Backup & Replication uses a full synthetic data storage mode, which eliminates the need for backup transformation. In this mode, each recovery point “knows” which data blocks are required to reconstruct a VM as of a particular point in time, thus eliminating the need for data transformation or the creation of increment chains.

11. VM Replication

With many of the legacy VM backup products, the only thing you get is a backup. You either have to provide your own replication solution, or pay extra for additional licensing that is required to unlock replication features. With native VM backup solutions, such as NAKIVO Backup & Replication, the replication features work out of the box for VMware, AWS EC2, and Hyper-V. VM replication creates and maintains identical copies of source VMs on target hosts. If disaster strikes, you can power on the VM replicas for near-instant disaster recovery. On top of that, you can save up to 30 recovery points for each replica and roll back to a good recovery point at any time.

12. Network Acceleration

WAN and LAN links are often slow or limited in bandwidth, which is why NAKIVO Backup & Replication can use compression and traffic reduction techniques to speed up data transfer. On average, this results in a network load reduction of 50% and a data transfer acceleration of 2X for VM backup, replication, and recovery. This feature is outside of the realm of legacy backup solutions.

13. LAN-free Data Transfer

NAKIVO Backup & Replication automatically uses LAN-free data transfer modes, Hot Add, and Direct SAN Access to bypass LAN, offload production networks, and significantly increase backup and recovery speeds.

14. Instant Recovery

NAKIVO Backup & Replication provides the ability to instantly boot VMs directly from compressed and deduplicated VM backups. This way, you can recover your VMs in seconds, without waiting for a full restore from backup files. Once a VM is running, you can migrate it to production for permanent recovery. NAKIVO Backup & Replication also enables you to browse, search, and recover files, Microsoft Exchange objects, Microsoft SQL objects, and Microsoft Active Directory objects directly from compressed and deduplicated VM backups, without using agents or restoring the entire VM first.

15. Backup Automation

API integration is all the rage in today’s world of automated operations. Being able to control, automate, and orchestrate VM backup, replication, and recovery tasks is a key component of automating daily IT operations. The APIs allow NAKIVO Backup & Replication to integrate with monitoring, automation, and orchestration solutions, which effectively reduces data protection costs. With most legacy VM backup solutions, API integration is simply out of the question, which further emphasizes the disparity between legacy and native VM backup solutions.
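As a sketch of what such API-driven automation can look like, the snippet below assembles an authenticated JSON request that a monitoring or orchestration tool might send to trigger a backup job. The base URL, endpoint path, token, and payload fields are all hypothetical placeholders chosen for illustration; they are not NAKIVO's documented API.

```python
import json
import urllib.request

BASE_URL = "https://backup.example.com/api"  # placeholder address, not a real endpoint
TOKEN = "example-token"                      # placeholder credential

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Assemble an authenticated JSON request for the (hypothetical) backup API."""
    return urllib.request.Request(
        f"{BASE_URL}/{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
    )

def run_backup_job(job_id: int) -> dict:
    """Trigger a job run; an orchestrator would call this on a schedule or alert."""
    with urllib.request.urlopen(build_request("jobs/run", {"jobId": job_id})) as resp:
        return json.load(resp)
```

An orchestration platform would wrap calls like `run_backup_job` in its own scheduling, retry, and alerting logic.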

16. Advanced BaaS and DRaaS

NAKIVO Backup & Replication delivers Backup-as-a-Service and Disaster-Recovery-as-a-Service. Multi-tenancy allows you to create multiple isolated groups, departments, or tenants within a single product deployment and manage them from a single pane of glass. This way, you can introduce backup-as-a-service to your service consumers more easily and cost-effectively. In the Multi-Tenant mode, tenants can access the self-service portal to offload backup, replication, and recovery tasks from the service provider.

17. Up to Date on Time

Legacy backup vendors are like old turtles – they’re slow. It can take them months to provide support for a new OS or hypervisor release, blocking infrastructure updates, and when the support is finally available, it may not fully cover all features. NAKIVO Backup & Replication is built using modern technologies, does not have to support a codebase from a century ago, and thus can provide much faster releases. NAKIVO stays up to date with the industry and has provided timely updates, including support for VMware vSphere v6.5, Microsoft Hyper-V 2016, AWS Cold HDD EBS volumes, Microsoft Windows 10, and Windows Server 2016.

18. Frequent Feature Releases

Legacy backup vendors usually use old R&D management approaches and thus have very long release cycles. As a result, it may take years for a legacy backup solution to add a feature that you needed yesterday. NAKIVO is fast and agile, and has delivered new features for customers every quarter.

19. Simple Licensing

The licensing model of legacy backup solutions is often complex, and requires that you license agents, features, sockets, and capacity. The licensing of NAKIVO Backup & Replication is simple and straightforward: you just need to license sockets on the source hosts for VM backup and replication. There is no need to license agents (because there are none), capacity (as it’s not restricted), or targets for replication and recovery.

20. Small Footprint

NAKIVO Backup & Replication requires just 2 CPUs and 4GB RAM for the entire solution and all its components to protect small and mid-sized environments. Deploying a legacy backup solution can be like letting a herd of hungry hippopotami into your environment.

21. Easy to Deploy and Scale

Legacy backup solutions are overly complex and often require weeks of professional services to be deployed. In contrast, NAKIVO Backup & Replication can be deployed in under 1 minute on Windows, Linux, and NAS, imported as a VMware VA or launched as an AWS AMI. Once deployed, the product can be easily scaled out to support large and distributed environments.

NAKIVO Backup & Replication at a Glance

NAKIVO Backup & Replication is a fast, reliable, and affordable VM backup solution for protecting VMware, Hyper-V, and AWS EC2 environments. NAKIVO Backup & Replication offers advanced features that increase backup performance, improve reliability, speed up recovery, and, as a result, help save time and money.

Deploy in under 1 minute

Pre-configured VMware VA and AWS AMI; 1-click deployment on ASUSTOR, QNAP, Synology, and WD NAS; 1-click Windows installer, 1-command Linux installer

Protect VMs

Native, agentless, image-based, application-aware backup and replication for VMware, Hyper-V, and AWS VMs

Reduce backup size

Exclusion of SWAP files and partitions, global backup deduplication, adjustable backup compression

Increase backup speed

Forever-incremental backups with CBT/RCT, LAN-free data transfer, network acceleration, up to 2X performance when installed on NAS

Ensure recoverability

Instant backup verification with screenshots of test-recovered VMs, backup copy offsite/to the cloud

Decrease recovery time

Instant recovery of VMs, files, Exchange objects, SQL objects, Active Directory objects; DR with VM replicas


Founded in 2012, NAKIVO is a US corporation, which develops a fast, reliable, and affordable data protection solution for VMware, Hyper-V, and cloud environments. With 20 consecutive quarters of double-digit growth, 5-star online community reviews, 97.3% of customers happy with support, and more than 10,000 deployments worldwide including Honda, Coca-Cola, China Airlines, Microsemi and many, many others, NAKIVO is one of the fastest-growing data protection software vendors in the industry. NAKIVO has a global presence with over 2,000 channel partners in 124 countries worldwide. Visit www.nakivo.com to learn more.

Are you a BCM practitioner in the middle of configuring your BIA questionnaire? Are you scratching your head over the part about identifying the impact categories that are most relevant to your organization?

If so, you are not alone. This is a famously confusing aspect of the Business Impact Analysis. It is also a highly important one.

Impact categories are the aspects of your business that you will be looking at to determine the negative effects of disruptions of varying lengths.



A Crisis Simulation exercise is a great opportunity for a team of professionals to come together and address gaps and issues they may have when it comes to an event.

It is a controlled environment for you to understand individual skillsets during a crisis and how the organization can communicate and coordinate during one.

As a tool, tabletop exercises (also known as TTX), when run effectively and correctly, can be extremely impactful not just to the team, but the organization.

However, as a facilitator of these events, you only have one short window to attain interest and commitment for future sessions. This is a vital step in your journey to building a culture of preparedness.

The preparation to get to a positive and effective level is simple. We recommend five key steps:



Tight supply-high demand has been a running theme in top US data center markets in recent years. Supply has been especially tight in Chicago, where real estate brokers say almost zero new data center capacity was delivered last year.

That’s about to change. Numerous new construction projects kicked off last year, and January saw news of several more that are in the works. ComEd, the utility that serves the area, announced in November a project to expand its Itasca substation so it can handle the massive load all the new data centers are about to add to the grid.  

If recent wholesale data center leases in the Chicago market are any indicator, the demand is there. Last year, Apple leased 14.5MW in Chicago from DuPont Fabros Technology (now part of Digital Realty Trust), and Two Sigma leased 2.2MW from CyrusOne, according to North American Data Centers. T5 Data Centers announced earlier this month a pre-lease by an S&P 500 client in the second phase of its Chicago build-out that’s currently underway.



The BIA is always a hot topic in business continuity. Everybody wants to know how to do a Business Impact Analysis (BIA).

This interest is not misplaced, since the BIA is a critical part of an organization’s business continuity program.

Here’s our Business Impact Analysis (BIA) definition: A BIA provides you with a clear picture of the criticality of your business operations based on the processes they perform, and helps you identify the dependencies (e.g., computer systems, vital records) that must be in place for those processes to run. In essence, it serves as the foundation of any good continuity strategy. Once you understand which business processes are most critical to the livelihood of your company, you can use this information to build an effective strategy that addresses only the areas that need to be recovered, within the designated time frames in which to recover them.
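One way to picture how BIA output feeds a continuity strategy is as a simple data model: each process carries its Recovery Time Objective (RTO) and dependencies, and the strategy covers exactly those processes (and their dependencies) whose RTO falls within a given recovery window. The sketch below is illustrative only, not any particular tool's schema, and the sample processes and dependencies are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """One business process as captured in a BIA (illustrative model)."""
    name: str
    rto_hours: int  # Recovery Time Objective determined from the BIA
    dependencies: list = field(default_factory=list)  # systems, vital records, etc.

def recovery_scope(processes, window_hours):
    """Return the processes, and the dependencies they rely on, that must
    be restored within the given recovery window."""
    in_scope = [p for p in processes if p.rto_hours <= window_hours]
    deps = sorted({d for p in in_scope for d in p.dependencies})
    return [p.name for p in in_scope], deps

catalog = [
    Process("payment processing", 4, ["core banking system", "payment gateway"]),
    Process("payroll", 72, ["HR database"]),
    Process("customer support", 8, ["CRM", "phone system"]),
]

names, deps = recovery_scope(catalog, window_hours=24)
```

With a 24-hour window, payroll (RTO of 72 hours) drops out of scope, which is precisely how BIA results keep a recovery strategy focused on what actually needs restoring first.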

However, it is important to remember that the BIA is a waste of time if the organization neglects to use the results to correctly define and establish Recovery Time Objectives (RTOs).



As we move closer to the enforceable compliance date of May 25, 2018, for the General Data Protection Regulation (GDPR), many organizations are asking themselves if they are on track to meet the regulation’s requirements. Many organizations are still unsure whether the regulation even applies to them. Given the severity of potential penalties for non-compliance (the greater of €20 million or 4% of revenue for violations of core tenets of GDPR, such as violation of data subject rights or transfers of data to unauthorized third countries), this perspective covers whom GDPR applies to and the key items you should explore in your organization to ensure you are prepared.



BARCELONA — Over the next five years, strong growth in the number of mobile users, business digitization demands, Internet of Things (IoT) connections and mobile video consumption will place significant challenges on networks. The increasing traffic has yet to show an equivalent growth in ARPU.

As service providers prepare for the next wave of network speed, extensive architectural transformation involving programmability and automation will be needed to support these capabilities and future innovations, including the evolution of enterprise services, 5G, and IoT.

Orange knew it could reduce CapEx and OpEx by implementing new architectures on platforms ready for mass-scale networking and by automating a large number of tasks and operations. The company has taken the initiative to improve its business efficiency by deploying the Cisco® Network Services Orchestrator (NSO) software platform to its current and future network as a foundation for infrastructure programmability and automating method of procedure (MOP) operations and customer-facing services.

As a key technology enabler, Cisco NSO will help Orange and its subsidiaries realize new benefits, including:

  • Providing a highly efficient abstraction layer between network services and the underlying infrastructure components, even in complex, heterogeneous environments
  • Reducing service activation times from days to hours and dramatically improving time to market (TTM) for critical service offerings
  • Automating its service lifecycles and reducing manual configuration steps by as much as 90 percent across the spectrum of mobile and enterprise networks, including zero-touch provisioning of network devices
  • Empowering Orange teams along their journey toward SDN and NFV through use of an open, modern programmable platform
  • Reducing failed service activations and network issues by removing risk of human error

“Cisco’s model-driven approach to network automation and service orchestration is enabling Orange to drastically speed delivery of services across our entire lifecycles,” said Christian Gacon, vice president, Wireline Networks and Infrastructure, Orange. “Global deployment of Cisco NSO also provides uniform configuration management tools and consumable network APIs for business applications and customer self-service portals.”

“Visionary service providers like Orange recognize the value network automation and SDN offer to drive innovation in their markets,” said Yves Padrines, vice president, Global Service Provider EMEAR, Cisco. “Cisco’s network automation software and product portfolio enables carriers to simplify their operations through sophisticated data analysis and proactive control, helping them to continue delivering superior customer experiences without interruption.”

Cisco is leading the disruption in the industry with our technology innovations in systems, silicon, optics, and security and our unrivalled expertise in mass-scale networking, automation, optical, cable access, video, and mobility. Together with our portfolio of professional services, we can enable service providers and media and web companies to reduce cost and complexity, help secure their networks, and grow revenue. 

Supporting Resources

RSS Feed for Cisco: http://newsroom.cisco.com/dlls/rss.html

About Cisco
Cisco (NASDAQ:CSCO) is the worldwide technology leader that has been making the Internet work since 1984. Our people, products, and partners help society securely connect and seize tomorrow’s digital opportunity today. Discover more at newsroom.cisco.com and follow us on Twitter at @Cisco.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company.

Data protection has suddenly become hot once again. After years languishing as an unexciting item on every storage manager’s to-do list, it has once again moved to top of mind. This may be a sign of the times: growing cloud adoption, the rise of the distributed enterprise and the demand for analytics are all driving data protection to the forefront. Here are some of the top trends in this area.

Data Centricity

Bob Hammer, CEO of Commvault, believes the market is undergoing a fundamental shift. The traditional role of IT was to put up the infrastructure, then figure out how to fit in the apps and the data. But that may not work anymore.

“We are seeing a shift from an infrastructure-centric to a data-centric view,” said Hammer. “In the past, data was centralized and now it is distributed.”



(TNS) - Public safety officials will soon be able to microtarget areas for cell phone alerts during natural disasters after the Federal Communications Commission on Tuesday approved changes to the nation’s emergency communications system.

The approved upgrades to the Wireless Emergency Alerts system will allow public safety officials to send alerts to all the cell phones in areas as small as one-tenth of a mile in radius -- or about the size of Minute Maid Park -- once the new rules are adopted by the November 2019 deadline also approved by the FCC.

Previously, alerts could only be sent to all cellphones in a specific county -- a problem that was laid bare by Hurricane Harvey and the historic California wildfires.



Data and applications dealing directly with financial transactions, customer information or intellectual property tend to benefit from more security attention. Others, buried inside an enterprise, invisible to many yet vital to operations, may not get the same consideration, either from entities using them or vendors selling them. Enterprise resource planning (ERP) systems are an example. However, as vendors now seek to extend functionality, adding modules to facilitate supplier collaboration and CRM to tie customer demand closer to factory schedules, ERP may be among the most exposed systems, with the highest impacts in case of failure.

What consequences could an ERP system breach or breakdown have? Impacts range from production stoppages and sabotaged quality to theft of customer and payment information. There are also follow-on consequences for business partners if the ERP system is used as a stepping stone for attacks on externally connected systems. Nonetheless, security continues to take a back seat in many ERP implementations. Vendors are too busy playing catch-up with new functionality. Enterprises using ERP systems are often obsessed with increasing productivity and profitability, without thinking about safeguarding these systems now or into the future.



Wednesday, 31 January 2018 15:43

Information Security and ERP Systems

As part of its resiliency program, one of our clients recently performed their annual disaster recovery test, in which they failed over their production data center to a backup data center. The test was scheduled for 96 hours (4 days) to restore their Tier 0 mission-critical services, and involved 43 applications, 17 different infrastructure teams, and 32 client test teams.

This year our client wanted to automate the DR test workflow (task allocation, status monitoring, successor alerts, and issue management) and deploy real-time analytic dashboards to keep their senior managers updated on test progress. With eBRP’s CommandCentre deployed to manage that automation, 108 plans were activated, and during the recovery testing more than 211 recovery team members and 6 incident commanders logged in to collaborate and facilitate the recovery efficiently.



Tuesday, 30 January 2018 15:27

Disaster Recovery -- Exercised

Two weeks ago, I took a long-awaited trip to Walt Disney World in Orlando, Florida. I’ve been there several times, but I’m amazed at how the Disney experience has changed over the last five years. Today, the world of ‘all-things Disney’ is so much easier using the “MagicBand,” a plastic watch-sized bracelet equipped with an RFID radio that tracks your progress through the parks, monitors your purchases, keeps up with wait times and even opens your hotel door (if you’re staying on-site). I must say, though, it’s a little odd knowing that Disney is watching your every move, tracking how much you spend and where you spend your time.

Our “digital footprint” is much the same; everywhere we go, and everything we do, is tracked. We’re monitored on the internet, through our smartphones, and on cameras placed virtually everywhere. While 68 percent of consumers say they don’t trust brands to handle their personal information appropriately, last year was a record-breaker in terms of data breaches at such places as Equifax, Verizon and Uber. Sadly, we’re never more than a double-click away from disaster.

The good news is, today is the perfect time to take inventory of your digital presence and make sure you’re doing everything possible to protect your personal information. Data Privacy Day (#PrivacyAware) is an international effort held annually on Jan. 28 to create awareness about the importance of respecting privacy, safeguarding data and enabling trust. Sponsored by the National Cyber Security Alliance (NCSA), 2018 marks the tenth anniversary of this annual effort to bring together businesses and private citizens to share the best strategies for protecting consumers’ private information.



Did you know that one of the biggest cybersecurity threats to your business is your employees?

Before you call an emergency meeting to identify the culprits, note one important fact. These employees most likely have no idea that their online activity can lead to cyber fraud. Here at OnSolve, we have delved into the ways that employees create risks to companies. Along the way, our research has identified the most likely cyber risks for every month and season. Let’s touch base on a few of these key points.

Employee Cybersecurity Concerns

As a human-powered organization, you need to hire people to handle tasks that keep your business up and running. Through professional recruiting and vetting processes, you hope to hire individuals who are trustworthy and committed to cybersecurity. In fact, your organization is most likely already doing exactly that. The most common cyber breaches are a result of human error, not malicious intent.



Software selection can be daunting. There are plenty of uncertainties and questions to work through when looking to implement Enterprise Risk Management (ERM) software.

Maybe you’re not sure what to be concerned about. Maybe you’re not sure how the process works, or what the hidden costs are.

Here are eight questions that you shouldn’t hesitate to ask your current or potential ERM vendor:



You can’t check in for your flight on the airline’s app. The website won’t let you buy the plane ticket you wanted. The app can’t tell you whether your flight is on time.

Unfortunately, technology glitches and outages like these are all too common. In 2017 alone, there were six major U.S.-based airline outages caused by IT failures. We all rely on services that make our lives easier, often seamlessly. But all of them depend on IT, and IT can—and often does—fail.


That’s an issue with severe consequences for airlines, especially since 84 percent of American travelers in a recent survey say they use an airline’s website or mobile app during the travel process.



Last week we talked about the importance of finding out management’s risk tolerance and creating a business continuity program which will keep risk for the organization within those limits. Today, I thought I’d get more specific about how you go about doing that by discussing the five most important risk mitigation controls within your business continuity plan.

The way to limit the risk in your program is by implementing measures to limit the adverse effects of potential events: risk mitigation controls.

Here’s an example of how mitigation controls play a role in your everyday life: When you tell an ATM how much cash you want and receive that exact amount—with the withdrawal being accurately noted on your statement—this comes about because of a whole series of mitigation controls that have been put in place by the bank. These controls are meant to accurately manage and track cash disbursements.

In risk management, mitigation controls provide a parallel type of control over risk.



The false ballistic missile alarm in Hawaii that panicked the island’s residents and visitors for 38 minutes prompted some self-evaluation among state offices of emergency management on the mainland, along with a hard lesson learned.

The alert, issued by the Hawaii Emergency Management Agency on Jan. 13, warned of an inbound missile and wasn’t corrected until 38 minutes later. It happened because a staffer had simply clicked on a wrong link.

“Could it happen here?” was a common refrain after the blunder, and the most common response was that human error does happen, but protocols and redundancies are in place to mitigate that.



Organizations today are incredibly dependent on both their computer systems and the information that they store. Data continues to grow exponentially and, despite the current economic climate, the demand for data storage has not declined.

In fact, data storage has become a hot topic of discussion, as organizations contemplate how to store the mass amounts of data they’re generating. Trends such as increases in user-created content, coupled with mounting pressures from regulatory compliance, are causing society to store more information over longer periods of time.

Simply adding more storage capacity to keep up with this data growth is no longer an acceptable strategy for organizations faced with constricting budgets, physical floor space, and power and management resources. Enter the need for a modern storage solution, built on principles of high density and low power consumption.



(TNS) - On a rooftop 18 stories above a busy Kansas City, Mo., street, it’s 16 degrees, the wind’s blowing and here comes a bunch of bundled-up guys who, fittingly, want to see what some might call a monument to madness.

And there it sits atop this old building on Independence Avenue: a Chrysler Victory Air-Raid Siren. Twelve feet long and 3 tons. If Soviet missiles had ever come over the pole headed our way, somebody would have fired up the siren’s V8 hemi engine and blasted 138 decibels over 25 miles.

“They must have found a deaf guy to operate it,” Stephen Bean, a coordinator with Kansas City’s emergency management office, yells to the others as they look at the siren, which has a hand-operated lever.

The siren was so loud, legend has it, it would set grass on fire.



An open platform to access and share information for everyone, everywhere has been the fundamental role of the internet since its inception. Sadly, the open internet has come under attack. The recent Federal Communications Commission ruling to repeal net neutrality protections means that communication networks in the US have become more open to abuse by large corporations. The internet was originally envisioned to be peer-to-peer with no dependency on any central entities. From now on, major telecommunications companies will have the power to decide what content runs over “their” networks. Telecom companies will be able to prioritize different types of internet traffic, block access, slow down or speed up services as they wish.

This new state of the internet proposed by the FCC is an attack against our modern social fabric. It is undeniable that open internet has improved the lives of billions of people all over the world. Revolutionary products and services have been possible thanks to equal treatment of all internet traffic. An open internet has enabled innovative start-up companies to create new products and services and pave the path for continued technological and social progress. Now, the FCC has decided to hamper progress and economic opportunity and hand the keys to our internet to a few dominant players. Without net neutrality, a few large corporations can restructure how the Internet works and potentially slow down the pace of technological and social progress.



(TNS) - Jordan Bond is proof that good ideas can come from unexpected places. For Bond, the place was Norway and the idea was a life-saving mobile phone application.

“I noticed a medical study in Norway that studied whether people survived better if citizens did CPR (cardiopulmonary resuscitation) versus if they didn’t, using an app,” said Bond, a senior firefighter/medic with the Newport News Fire Department the past eight years. “I think somehow that developed into PulsePoint a few years later.

“As soon as I saw PulsePoint, I was like, that needs to come here.”

Bond pitched the app to a committee of department firefighters, then to Fire Chief R.B. Alley III and R.E. Lee, the assistant fire chief of medical services. All approved and the NNFD announced the launch of PulsePoint in a news conference Tuesday at Fire Station No. 3.



There are many good reasons for an organization to relocate its data center; however, there is only one good way to go about executing such a move, and that is carefully.

Successful data center migration does not require that any of the people involved be a genius. It does require patience, effort, meticulous planning, and the ability to tease out the complex dependencies among your applications.

MHA has assisted many organizations with relocating their data centers. In today’s post, we’ll provide some tips and considerations that might be helpful to anyone whose organization is contemplating a data center relocation or is in the planning stages of one.



(TNS) - Local officials said coastal Whatcom County residents were never in any danger despite tsunami alerts that were issued for much of the west coast of North America early Tuesday in the wake of a powerful Alaskan earthquake.

“No, we were not under any real risk, but once again there were lots of inconsistencies between the National Weather Service and Environment Canada,” said John Gargett, deputy director of the Whatcom County Sheriff’s Office Division of Emergency Management.

A tsunami warning and a tsunami watch were issued from the western tip of the Aleutian Islands in Alaska to San Diego in California in the wake of a 7.9 magnitude earthquake that struck at 1:32 a.m. PDT about 170 miles southeast of Kodiak Island in the Gulf of Alaska.



Thursday, 25 January 2018 15:19

Wave of Confusion Follows Tsunami Alert

(TNS) — Lynn Lu flipped on Java Beach Cafe’s lights at 5 a.m. Tuesday and got to work opening the small San Francisco coffee and bagel shop just half a mile away from Ocean Beach.

She turned on the coffee pots and cash register, took down the chairs from the wooden tables and opened the doors at 1396 La Playa St.

In Lu’s mind, it was business as usual.

That is until her first customer came in for their morning cup of coffee and told her a tsunami watch had been called in the early morning hours for the coast of California.

“If I’d known about it, it would’ve been scary,” said Lu, 56, who lives by the San Francisco Zoo. “I didn’t know anything about it this morning until a customer told me. When I heard it was canceled, I kind of relaxed.”

A tsunami watch was issued in California after a 7.9 earthquake hit at 1:32 a.m. in the Gulf of Alaska. The National Tsunami Warning Center canceled the tsunami watch at about 4 a.m. but said some areas might still see some sea level changes. The San Francisco Department of Emergency Management advised residents along the coast to remain cautious.



(TNS) - The lives of EMS workers often revolve around high tension and stress.

They're often the first to respond for patients during their moments of greatest need, be it a heart attack, motor vehicle crash or other emergency.

Being sharp and on their game is paramount.

“We are required as clinicians in the field to make decisions quickly in austere environments, without much information,” said Daniel Patterson, a paramedic for Parkview EMS in O'Hara. “You have to be clear and alert to make those decisions.”



Wednesday, 24 January 2018 15:44

Guidelines Hope to Help Tired EMS Workers

Cautionary Tales from Morgan Stanley and the U.S. Olympic Committee


From big corporations to non-profits, from government entities like the U.S. Congress to schools and universities, all types of organizations are firing high-profile men for sexual misconduct. Whether the firings are justified or not is a big question, but either way, one of two crises will inevitably follow.

If the accused man is found to be innocent, the organization has trashed its own reputation and brand, showing itself to be unprincipled and dishonorable. After all, it recklessly almost succeeded in ruining an innocent man’s life. That’s crisis number one.

If, on the other hand, the man is found to be decisively guilty of sexual misconduct, particularly if it was found to have occurred over a prolonged period of time, that’s crisis number two: The firing organization will be investigated for possibly having covered up or ignored reports of the bad behaviors.



Catastrophe risk modeling firm RMS puts insured and reinsured losses stemming from the December Thomas fire in Southern California at somewhere between $1 billion and $2.5 billion, reports the Artemis blog.

The fire, which started on December 4, became the largest in California history and was followed by devastating mudslides in burned areas stripped of vegetation.

RMS estimates include losses from burning or smoke damage to personal and commercial lines and insured losses due to business interruption and additional living expenses. They don’t include automobile and agriculture losses, or damage related to the recent mudslides.



Mahoning County is located on the eastern edge of Ohio at the border with Pennsylvania. It has a total area of 425 square miles, and as of the 2010 census, its population was 238,823. The county seat is Youngstown.


  • Eliminate application slowdowns caused by backups spilling over into the workday
  • Automate remaining county offices that were still paper-based
  • Extend use of data-intensive line-of-business applications such as GIS



Cybercrime will cost the globe’s businesses more than $2 trillion by the year 2019, according to a report from UK-based market analyst firm Juniper Research.

It’s hardly a surprise that so many companies include cyber threats at the top of their list of risks. And yet shockingly few have taken adequate measures to mitigate the potential dangers of data breaches and other cyber-related risks. Until now, that is. The Wall Street Journal recently reported on a trend within the manufacturing industry toward widespread adoption of cyber insurance. Here’s a closer look at the issue, along with why cybersecurity insurance offers critical protection for 21st century businesses.



IT executives are more fired up than ever about using public cloud and data initiatives to differentiate their businesses, according to a recent 451 Research survey. 

In its first-ever Voice of the Enterprise (VotE) Digital Pulse survey, released this week, 60 percent of more than 1,000 IT leaders surveyed report that they plan to run the majority of their IT off-premises by year-end 2019; that includes public cloud and SaaS. For this year, their top three IT initiatives are business intelligence, machine learning/artificial intelligence and big data. 

This quarterly survey signals where partners should focus their efforts. For starters, there’s the increasing importance of data-centric technologies. 



(TNS) - When news of the Hawaii missile alert mistake broke last week, it immediately brought a sense of déjà vu to local emergency management officials.

A similar — though not as alarming — incident took place here last September in the aftermath of Hurricane Irma. Volusia County officials were on a countywide conference call with officials from various cities when everyone's cell phones began sounding off with an "extreme alert."

"Volusia County boil water notice. Residents are advised to boil water before consumption," warned the notice. It was a mistake.

The warning, issued in error by a state employee, created hours of confusion as officials tried to figure out what was going on and notify people there was no need to boil their water.

"Thank God it wasn't an inbound missile alert like the mistake made in Hawaii," Jim Judge, Volusia County's emergency management director, said last week.



Anyone following enterprise data storage news couldn’t help but notice that aspects of the backup market are struggling badly. From its glory days of a couple of years back, the purpose-built backup appliance (PBBA), for example, has been trending downward in revenue, according to IDC.

"The PBBA market remains in a state of transition, posting a 16.2% decline in the second quarter of 2017," said Liz Conner, an analyst at IDC. "Following a similar trend to the enterprise storage systems market, the traditional backup market is declining as end users and vendors alike explore new technology."

She’s talking about alternatives such as the cloud, replication and snapshots. But can these really replace backup?



Getting caught in an emergency situation without a solid and well-thought-out plan puts stress on your residents and employees.

Every moment matters in a crisis, and you need to help your staff react as professionally and promptly as possible. Avoiding common mistakes through preparation and follow-through will help make your emergency communication strategy more resilient — allowing you to keep people safe during a crisis.

Download the Seven Deadly Sins of Emergency Notification to avoid common mistakes.

Threats to life and property, both manmade and natural, are around every corner these days. From shootings to bomb cyclones and mudslides, it’s especially important that government entities are able to keep a tight handle on communications during a time of crisis. Here are some common pitfalls to avoid:



(TNS) - Washington and Idaho are in the midst of what’s likely to be the worst flu season since the 2009 swine flu pandemic.

Forty Washingtonians died from the flu in the past week, according to the Washington Department of Health, nearly doubling the total flu season count to 86 deaths.

That includes 15 deaths in Spokane.

A total of 181 people have been hospitalized in Spokane County so far in January. If that trend continues, hospitalizations should easily surpass the previous record of 231 in January 2015.



To kick off the new year, industry experts and hosts of our new podcast, The Watchdog, Brian McIlravey and Tim Chisholm sat down to chat about their forecasts for the shifting risk and security landscape this year and how practitioners can stay ahead of the curve. Read the full guide to the top corporate security threats of 2018 here.

Prefer to listen? No problem! Tune in to the episode on iTunes.

Tim Chisholm: All right. It’s a new year, Brian.

Brian McIlravey: It is, Tim. It’s 2018. How do you think the planet is this month, Tim?

Tim Chisholm: The planet has maybe been in better shape before. But what do you think? Where are you sitting? How are you feeling?

Brian McIlravey: There are all kinds of different charts on the top security risks that pop out for 2018, and they’re all very similar. But in terms of Resolver’s guide to the top risk and security trends of 2018, I went through a bunch of them and found some patterns that were very interesting. What I’m going to do is focus it down to two that I think are very prevalent. One that’s been common going back to probably about 1811 is natural disasters. I mean, there’s some risks that we know are going to be on this list every single year. But there was an article that came out about the planet and natural disasters that I found especially fascinating – 2017 was the most costly U.S. disaster year on record just in terms of the massive, massive amount of billions spent—which you might expect given the significant disasters that happened this year.



What would cause more damage to your business? A hurricane or a cyber attack?

If you said the latter, you’re in good company.

Even after the costliest hurricane season of all time in the U.S., 74 percent of business leaders we surveyed said they consider a data breach, hack or cyber attack a greater business risk than a natural disaster.



When your organization isn’t risk literate, the result can often resemble a horror movie; when it is, you can save the day.

In some ways, being a business continuity management consultant is a lot like watching a horror movie. How?

Well, do you know how in horror movies people are always doing things that you know are liable to get them killed, but that they do anyway—despite your yelling at the screen for them to run the other way—because they are lacking critical information that you’ve been given by the director?

It’s the same for a BC consultant. I repeatedly see organizations doing things that I know are harmful to their long-term best interests, based on things I’m aware of that they are not, despite my yelling at the screen (figuratively speaking, of course) and urging them to turn aside from their intended course of action.



(TNS) - Sedgwick County, Kan., has a system for warning its residents about an impending nuclear attack.

You’ve probably already heard it.

The last time the county tested it, in 2017, people called asking why the tornado sirens sounded weird.

The warning sirens in Sedgwick County have two different modes: The alert mode, a steady tone used for tornadoes and tested most Mondays at noon, and the attack mode, a classic rise and fall sound used for air attacks.

The second tone is the one that would likely be used in the event of a missile attack, said Cody Charvat, interim emergency manager with Sedgwick County.

Technically it can be sounded anytime the United States is under attack.



Vertiv announced this week that it has acquired the privately owned custom air handling manufacturer Energy Labs for an undisclosed amount. Evidently, Platinum Equity -- Vertiv's owner since acquiring the company, then known as Emerson Network Power, about a year ago -- meant it when it said its focus would be on long-term growth and not on squeezing maximum short-term profits by putting Vertiv on a starvation diet.

There was room for doubt. In July, just months after Peter Panfil, VP global power at Vertiv told Data Center Knowledge about Platinum's approach, Vertiv sold ASCO, its automatic transfer switch business, to Schneider Electric in a deal worth $1.25 billion.



While just about every business is shifting in some shape or form, the regulatory compliance industry is undergoing a revolution. Keeping pace with legislative changes, consumer behaviour, and technological advancements has become very challenging for many Canadian financial institutions.

As new (and old) technology continues to disrupt the industry, we wanted to take a closer look at the biggest trends and growth areas for 2018.



Public, private and hybrid cloud environments are more accessible than ever, allowing organizations to scale more quickly and efficiently. With more companies migrating to the cloud, concerns for data protection also increase. From safely enabling SaaS applications to securing physical data centers, the adoption of cloud technologies will, in large part, depend on the perception of the cloud as being “safe.”

While cloud technologies offer efficiencies of cost and scale, they also create a challenge in securing the hybrid world of on-premises, platform and cloud security. The reality is that as cloud adoption continues to rise, the value of the data stored within the cloud rises as well, and cybersecurity threats aimed at the cloud will increase in intensity and frequency.

This makes it critical that today’s organizations have a plan in place for hiring elite cloud talent who are prepared to tackle tomorrow’s toughest cybersecurity challenges, because the use of advanced cloud technologies will require a security strategy that matches the requirements of cloud products and services themselves.



Thursday, 18 January 2018 16:03

Hiring Cloud Talent Will Improve Cybersecurity

(TNS) — Re-entry after a Category 4 disaster like Hurricane Irma can affect public safety in the form of homes being looted and also hinder efforts to restore utilities, said Monroe County Emergency Management Director Marty Senterfitt.

Senterfitt stood his ground on the mandatory evacuation order that was in place before the Sept. 10 Category 4 storm hammered Big Pine Key and other spots in the Florida Keys. Those who survived unscathed were lucky, he suggested.

"That's like putting a gun to your head and when you win you say, 'I'm smart,' " Senterfitt told a crowd of more than 100 Lower Keys residents at a two-hour meeting Monday night at Keys Community Church.



Many predictions these days center upon Artificial Intelligence (AI). We are told AI will impact every aspect of society. All facets of our lives will be enriched by AI technology. And of course, AI will pervade each and every element within the data center.

This may be true – eventually. But please note that Spielberg’s “AI” movie came out in 2001. Despite the AI hype, not much has changed in that time. And speaking of 2001, the Kubrick film of that title was released in 1968. Fifty years later, where’s HAL? The best we have is Alexa being able to tell us the weather or play a few tunes.

So let’s get practical about AI in the data center. What kind of tangible impact is it having NOW in terms of storage, applications and security that a data center manager needs to know about? In other words, let’s not worry about future potential – how can AI help immediately in the data center? And what should the data center manager be doing about it?



Family-owned and operated businesses hold a special place in our economy and social fabric. They occupy a large place as well, accounting for nearly 20% of all businesses. One common perception is that this distinct place is safe from employee lawsuits and the need for Employment Practices Liability Insurance (EPLI).

However, the reality told by EEOC litigation and reported settlements is far different. Family businesses are as vulnerable as any other type of organization, for a number of reasons.



Welcome to 2018! We’ve already been gearing up to tackle new challenges, more connected users, and an ever-evolving digital world. Many of the broader trends are having an impact on the modern data center. A few key projects that started over the course of 2017 will help shape the data center over the next few years.

In 2018, new solutions and concepts around data center architecture will force business leaders and IT professionals to think differently when it comes to helping the data center run more optimally and create competitive advantages.

Today, we look at those considerations, technologies, and new solutions. Remember, as the data center shifts to support more digital strategies, the business will rely on the capabilities of your IT ecosystem to support new initiatives. As a note, the following technologies are projects that are currently dominating the list. These solutions aim to have the greatest impact without completely disrupting functionality.

With that, here’s the list of data center trends that everyone should be aware of for 2018:



Thursday, 18 January 2018 15:55

Be Aware of These 5 Data Center Trends in 2018

Following the news of Hawaii’s false ballistic missile alert on January 13, 2018, we sat down with crisis & emergency management expert, Kevin Hall, to get his thoughts on what went wrong and why.

To start us off, tell us what happened over the weekend in Hawaii? 

On the morning of Saturday, January 13th, 2018, people in the state of Hawaii received an alert message on their phones that read, “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”

The alert went out at approximately 8:07am and was issued by the Hawaii Emergency Management Agency (HI-EMA). According to the official report from the state, the activations included the Emergency Alert System and the Wireless Emergency Alert System, but from what I can gather, it seems that the alert was only sent through the wireless medium. It is interesting to note that no sirens were activated as part of this alert. 

How did that happen? What processes are involved in sending an emergency notification of that scale?



By DON MENNIG

Before I share the DR problems Evolve IP identified in our 2018 Disaster Recovery Survey I have a couple of writing caveats.

Caveat #1.) I’m not a ‘New Year’s Resolution’ kind of guy

Caveat #2.) I really dislike clichéd content – I don’t need to read “5 Reasons To Wash My Windows” … I know they are dirty.

Caveat #3.) I oftentimes find myself in the minority :)

So, for those of you that like resolutions and “Top 5” lists I am pleased to present …

“The Top 5 Disaster Recovery Resolutions for 2018!”

Click-bait ads coming to the bottom of a news website near you ;)

Now, unfortunately in all seriousness, our survey uncovered some very distressing disaster recovery statistics that need to be addressed by organizations before it’s too late.

Resolution #1: Complete your DR plan, then implement and test it.

Yeah, you’ve heard it before, but just like giving up your penchant for deep fried Twinkies, some things never seem to get done, and this year’s survey again proves the point! Only 31.5% of our nearly 1,000 respondents (IT professionals and C-level executives) noted that they had a complete DR plan! Perhaps even more alarming, of the 68.5% that did not have a complete plan, four in 10 had a plan that they felt was less than three-quarters of the way complete.

As you can likely imagine, rolling out an incomplete plan to the organization might seem odd, and that is likely why many DR plans remain on the shelf. Only two-thirds of respondents had formally implemented their plan in the business.

To continue with the Twinkie bashing, we all know that chowing down on the cream-filled, artery-clogging sweetness could potentially be really bad for you down the road. You also know that having an untested DR plan is bad for your business’s health. Our survey revealed that fewer than half of the firms had actually tested their DR plan in the last year.

Then again what are the chances your organization will actually need a data defibrillator in 2018…?

Resolution #2: Don’t get stuck in denial

Turns out, the chances are pretty high. Based on our survey results, you need to change your mindset about DR: it isn’t a question of if a DR incident will occur, but when. Over one-third of participants noted their organization had suffered an incident that required disaster recovery. And while hardware failure was the leading cause of incidents (noted by 50%), deliberate attacks are getting worse and growing faster than any other category.

In 2017, the number of respondents reporting that deliberate attacks had caused DR incidents increased to 17%, compared to 13% in 2016 and 6.5% in 2014! You might take all of the precautions in the world against attacks—constantly changing passwords, deploying aggressive security software, implementing secure file sharing and more—but hackers are getting smarter every year, and your associates are still human and make mistakes.

Resolution #3: Treat DR as though you have compliance requirements.

Even if your business does not have compliance requirements, it would likely benefit from acting as though it does. Of organizations that had suffered from an incident that required DR:

• 43.5% without compliance requirements took more than one business day to recover their IT operations.

• Just 28% of those with compliance requirements took more than one business day to recover their IT operations.

 Resolution #4: Fight for your DR budget.

As you’d probably expect, companies that budget sufficiently for DR are more likely to feel very prepared to fully recover from an incident. In fact, 65.5% of those firms noted they felt “very prepared.” Not so much for the underfunded: just 1 in 5 underfunded firms felt “very prepared” to handle a DR situation.

What sets off the alarm bells here is that four in 10 IT professionals felt that their organization had underfunded DR. Interestingly, three in 10 C-level executives agreed. Couple those numbers with a 33%+ likelihood of a DR situation arising in the future and you’ve got the potential for a major problem!

So, how do you fight for budget? Share some of these survey results with your executive team along with a document that quantifies just how much an outage will cost your business in terms of lost sales and productivity! We created a simple downtime calculator to help you determine what it will cost you.
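The back-of-the-envelope math behind that kind of downtime calculator is simple enough to sketch. The figures below (revenue per hour, headcount, wage, productivity loss) are hypothetical placeholders, not numbers from the Evolve IP survey or its actual tool:

```python
def downtime_cost(revenue_per_hour, employees, avg_hourly_wage,
                  productivity_loss, outage_hours):
    """Rough outage cost: lost sales plus the cost of idled labor.

    productivity_loss is the fraction of staff output lost during the
    outage (e.g. 0.8 means employees are 80% less productive).
    """
    lost_sales = revenue_per_hour * outage_hours
    idle_labor = employees * avg_hourly_wage * productivity_loss * outage_hours
    return lost_sales + idle_labor

# Example: $10,000/hour in sales, 50 staff at $40/hour,
# 80% productivity loss, 4-hour outage
print(downtime_cost(10_000, 50, 40, 0.8, 4))  # 46400.0
```

Plugging in your own revenue and staffing figures turns an abstract risk into a dollar amount your executive team can weigh directly against the proposed DR budget.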

Resolution #5: Evolve Your DR Strategy

Far too many organizations continue to use legacy or insecure approaches to DR, introducing unnecessary risk and greater chances of failure. A couple of statistics really jumped out:

• 38% of firms relied on servers and hardware at the same location as the rest of their infrastructure

• 35.5% of firms use tapes for backup 

• 22% relied on public cloud for DR

If you’re relying exclusively, or primarily, on one of the methods above, take the time in 2018 to begin researching other solutions like DRaaS from providers such as Evolve IP or investing in a private, secure, secondary site that is geographically distant from your primary location.

Happy New Year! I wish you, your families and associates success and good health in 2018.

To learn more about Evolve IP’s suite of DR solutions visit www.evolveip.net/draas-suite.

Don Mennig is the senior vice president of marketing for Evolve IP.

Nuclear war, cyberattacks and environmental disasters top the list of man-made threats to global stability in 2018, according to a survey of 1,000 international leaders from business, government, education and service groups.

Another global financial meltdown, more likely in past years, has ebbed because of economic expansions underway worldwide, the annual World Economic Forum's Global Risks Report found. It was released Wednesday in advance of the forum's meeting in Davos, Switzerland, next week.

Mother nature topped the most significant risks facing the world for a second year in a row, the survey showed. They include natural disasters and extreme weather events that human-caused climate change may be abetting. 



Here’s something for your to-do list, if you’re not doing it already: The next time your organization holds cyber exercises, make sure you include third-party experts, bringing them in to observe, share insights, and provide feedback.

Experts such as law enforcement officers, data security consultants, your insurer, and public relations professionals can provide valuable insights that will strengthen your cybersecurity plan and better prepare you for a real-life emergency.

In today’s article, we’ll lay out who might be good to invite to your next cybersecurity party and what each type of expert can contribute. We’ll also sketch out how exactly you go about reaching out to these busy professionals and securing their participation.



As a woman who is still relatively new to the business continuity world, I thought it would be interesting to write a blog around ladies within this industry. Shortly into my research, an apparent pattern emerged in my search results: there is a gender gap in business continuity management – women are perpetually underrepresented. Gartner released a survey to almost 400 executives that revealed the ratio of men to women within business continuity management is almost 5 to 1. It went on to say that male executives outnumbered female executives almost 3 to 1. I also came across a blog post where the author went onto LinkedIn and searched for women with the word “resilience” in their job title, only to find fewer than five show up on the first three pages.



Wednesday, 17 January 2018 15:23


As we begin 2018, cybersecurity issues are already dominating the headlines. With that in mind, here are five industry predictions we see evolving in 2018:

Post Quantum Cybersecurity Discussion Warms up the Boardroom

We’re already seeing the uncertainty of cybersecurity in a post-quantum world percolate in many circles, but this is the year the discussion will gain traction in the top levels of business. We also expect to see the topic of cryptographic agility (or crypto agility) gather more momentum with the heightened urgency to develop standards that drive post quantum cryptography (PQC) and how this impacts business moving forward.

Remember, no algorithm lasts forever. It’s not a matter of “if” it will be broken, it’s a matter of “when.” As security experts grapple with preparing for a post-quantum world, top executives and business leaders will begin to genuinely consider what they can do to ensure all our connected “things” (cars, devices, infrastructure, etc.) remain secure. This questioning and testing of their ability to develop and implement an effective crypto agility approach are underpinning the key concerns of companies – irrespective of the industry and infrastructure, whether it’s an enterprise or consumer-related application.

In 2018, we’ll start seeing the discussion shift from questions to solutions. As a result, we expect the first of many customized, market segment (industry, or use case) specific crypto applications will be introduced to bridge the gap and offer the forward-looking ability to adapt to inevitable changing dynamics.
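What crypto agility means at the code level can be sketched simply: call sites never hard-code a cryptographic primitive; they go through a named indirection that can be re-pointed when an algorithm is deprecated. This is an illustrative pattern only, not a standardized PQC API – the registry and names below are hypothetical:

```python
import hashlib

# Hypothetical registry: call sites reference an algorithm by name, so a
# deprecated primitive can be swapped out at one configuration point
# rather than hunted down across the codebase.
ALGORITHMS = {
    "sha256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3_512": lambda data: hashlib.sha3_512(data).hexdigest(),
}

ACTIVE = "sha256"  # the single point of change when migration is required

def fingerprint(data: bytes) -> str:
    """Hash data with whichever algorithm is currently active."""
    return ALGORITHMS[ACTIVE](data)

print(fingerprint(b"hello") == hashlib.sha256(b"hello").hexdigest())  # True
```

The same indirection applies to signatures and key exchange: an organization that can flip one configuration value is far better positioned for a post-quantum migration than one with primitives baked into every call site.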



Wednesday, 17 January 2018 15:22

Five Cybersecurity Predictions for 2018

(TNS) - Hawaii leaders are taking heat from the highest level for the colossal blunder that resulted in 38 minutes of terror for residents, who thought that a missile was headed for the islands.

Residents and tourists spent a terrifying Saturday morning thinking that an attack was imminent all because a state employee in a Diamond Head bunker clicked his mouse twice. The mistake shocked many in Hawaii and elsewhere and left them questioning the credibility of the government that they count on to protect them in times of heightened tensions with North Korea.

Now even President Donald Trump is calling for answers. While he praised Hawaii leaders for taking responsibility for the mistake, Trump told reporters who were interviewing him outside his Florida golf club Sunday evening, “We’re going to get involved.”



Data loss is every business’s nightmare. In fact, the majority of companies that do experience a mass disappearance of vital, computer-kept information never turn their lights on again.

About 60 percent of small businesses that lose data shut down within six months, according to a study released in 2017 by Clutch, a Washington, D.C.-based research firm. Another report, by Gartner, shows a sizeable impact on medium-sized companies as well; 51 percent of those that encounter a major data breakdown close down within two years.

Cyber security experts say those stark numbers underscore the importance of being prepared with adequate security measures. Many businesses are not, according to the Clutch study, including 58 percent of small businesses.

“It basically comes down to the idea that how you protect and treat your data is commensurate with how important you think it is,” says Penny Garbus, co-founder of Soaring Eagle Consulting Inc. (www.SoaringEagle.guru) and co-author of Mining New Gold – Managing your Business Data. “You protect your jewelry and money, but you aren’t protecting your data. If you aren’t, you’re putting your entire business at risk.”

Companies both large and small often try to ensure the security of their IT infrastructure by outsourcing to a third-party security vendor. A recent study on cloud security conducted by Forrester Consulting found that nearly 80 percent of participants saw value in outside security expertise. Garbus gives three main ways that managed security services can save a business from the disaster of data loss:

•    Security check-ups. These are essential for cyber security. “The question you must ask yourself is, how much downtime can my business afford?” Garbus says. “One of the best ways to prevent cyber security issues is to have an expert conduct regular health checks on your system. That way if there are any lurking vulnerabilities or potential issues, they can be fixed before causing any damage.”

•    Performance measures. This includes analysis of software, server, cloud and firewall performance. Business these days operates in the realm of remote servers, cloud computing and unrelenting security threats. “As the technological landscape evolves and data security has become increasingly important, businesses recognize there’s much more to it than handling issues as they arise,” Garbus says.

•    IT development updates. Hackers are becoming more sophisticated every day. For example, ransomware was able to stall private businesses, hospitals, universities, and government agencies. “If you’re handling sensitive data, it’s smart to upgrade the cyber security methods you’ve been using from the beginning of your business,” Garbus says. “Small and medium-size companies aren’t as likely to have a dedicated IT person to oversee the multiple systems, so it behooves them to have a service in place that can keep abreast of changing technology.”

“You might think managed security is mainly for big businesses, but you can certainly make a case that small-to-medium businesses benefit the most,” Garbus says. “In many ways, they have the most to lose.”

About Penny Garbus
Penny Garbus, co-founder of Soaring Eagle Consulting Inc. (www.SoaringEagle.guru), is co-author of Mining New Gold – Managing Your Business Data. She has been working in the data-management field since leaving college when she worked as a data entry clerk for Pitney Bowes Credit. She later ran the training and marketing department of Northern Lights Software. 

The new year started off with a bang, if you consider a “bomb cyclone” or “bombogenesis” a noise-maker. The winter season’s first blizzard, Grayson, was a record-breaker, setting daily record cold temperatures all the way from the Northeast to the Gulf Coast.

As winter storm warnings continue to pop up across the country, individuals and businesses should brace themselves for the remainder of the winter season – at least seven more weeks. If you haven’t already done so, it might be time to dust off your disaster recovery plans, or at least begin planning for next year.

The biggest risk for companies during winter storms is power outages due to ice, along with facility issues caused by the cold, such as frozen water pipes. Roads could be treacherous, and air travel is usually impacted. As we learned with Grayson, hurricane-force winds are not out of the question, either. Human exposure to brutal cold temperatures is also a danger.



Tuesday, 16 January 2018 15:01

Winter Bears Down – Are You Ready?

Data storage backup has evolved considerably over the last two decades. Tape once prevailed with organizations running full backups either every night or each week. Tape cartridge after tape cartridge would be shipped off site for that rainy day they hoped would never come.

Then someone realized that full backups were mostly repeats of data previously backed up. So incrementals became popular – only back up new or changed data. That gave rise to the next logical extension – use deduplication to store one complete set of organizational data on disk (though many also kept additional tape copies offsite).
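The deduplication idea can be sketched in a few lines. The following is a minimal illustration of content-addressed storage, not any vendor's actual implementation: data is split into fixed-size chunks, and each unique chunk is stored once, keyed by its hash, so repeated data costs almost nothing:

```python
import hashlib

def dedup_store(data: bytes, store: dict, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep each unique chunk once,
    keyed by its SHA-256 digest. Returns the 'recipe' of digests needed
    to reassemble the original data."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks add no new storage
        recipe.append(digest)
    return recipe

store = {}
recipe = dedup_store(b"A" * 16384, store)  # four identical 4 KB chunks
print(len(store))                          # 1 -- only one unique chunk stored
assert b"".join(store[d] for d in recipe) == b"A" * 16384
```

Real deduplicating appliances add variable-size chunking, compression, and indexing at scale, but the principle is the same: one complete copy of the data, plus cheap recipes for everything that repeats.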

As time has gone on, many shouted, “tape is dead.” Now some insist backup is dead. They claim it’s a dated technology, which can be replaced by such things as snapshots and replication.



Tuesday, 16 January 2018 15:00

7 Reasons Why Data Backup is Here to Stay

Plunging temperatures, whiteout conditions, and icy roads can turn into a crisis even in the most prepared cities and states.

As a result, this is the season that puts crisis communication to the test. Consider how well your employees and security teams are prepared for communicating internally in the event of a weather emergency.

Plan Activation Strategies

Activation strategies are a crucial component to ensuring proper recovery during and after inclement weather. This strategy will put into action a crisis response team to handle the situation as quickly as possible. Crisis preparation involves a series of procedures that need to be in place ahead of time. This is essential for maintaining internal communications for your workers.

Internal notification software, such as CodeRED from OnSolve, is designed specifically for communications during an emergency situation. By incorporating this government-approved notification solution into your office, internal communications can be handled no matter the situation. Thanks to automated, advanced warnings along with geo-location communication using a variety of delivery modalities, internal notification systems integrate seamlessly into businesses of all sizes.



Monday, 15 January 2018 16:01

Employee Communication in Inclement Weather

OnSolve’s chief product officer, Daniel Graff-Radford, recently interviewed with SDM Magazine to discuss how mobile and integrators are a driving force behind today’s mass notification systems.

Whether choosing to go mobile or become a hard-wired hybrid, here are three ways mass notification systems are changing rapidly.

1. Mobile Integration Success

Mobile integration allows emergency communication to take place across a larger network. Emails, social media, texting and other forms of mobile communication can be achieved all at once using wireless. As a result, you have the potential to communicate with more people in a shorter span of time.

Yet mobile communication is not always perfect, especially in the case of a large-scale emergency or a cyberattack. The best move? Pursue IP wireless. This gives the organization much-needed control over the network. An organization can structure and prioritize emergency notifications based on the event type and its location. This integration provides mobile accessibility with the security associated with analog.
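As a sketch of what prioritizing notifications by event type and location might look like in practice (the event types, severity rankings, and function names here are illustrative assumptions, not any vendor's actual API):

```python
from dataclasses import dataclass

# Illustrative severity ranking: lower number = more urgent
PRIORITY = {"active_shooter": 0, "fire": 1, "severe_weather": 2, "outage": 3}

@dataclass
class Alert:
    event_type: str
    location: str

def route_alerts(alerts, region):
    """Keep only alerts for the given region, ordered most urgent first."""
    local = [a for a in alerts if a.location == region]
    return sorted(local, key=lambda a: PRIORITY.get(a.event_type, 99))

queue = [
    Alert("outage", "downtown"),
    Alert("active_shooter", "downtown"),
    Alert("fire", "suburbs"),
]
urgent_first = route_alerts(queue, "downtown")
```

Even this toy version shows the benefit of structure: the active-shooter alert jumps ahead of the routine outage, and the out-of-region alert is filtered out entirely.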



Crisis management, public relations, and business continuity are tested during a disaster event. Today, we’re analyzing business continuity plans and disaster response to determine a good public relations response vs. a bad one.

For today’s post, I thought we might try something new. Rather than write a formal article, I wanted to share some things with you that have been on my mind lately about business continuity and disaster recovery.

I have been observing other organizations’ disaster response efforts from the outside and trying to work out what’s really going on based on what we see in the media, as well as about what separates a good public relations response to a crisis from a bad one. I’ll touch on these and other topics below.



The end game for data center infrastructure management (DCIM) software is that it eventually enables self-managing, or fully autonomic, data centers.

The hope is that AI-driven management software (likely cloud-based) will monitor and control IT and facilities infrastructure, as well as applications, seamlessly and holistically – potentially across multiple sites. Cooling, power, compute, workloads, storage, and networking will flex dynamically to achieve maximum efficiency, productivity, and availability.

Facilities equipment and IT will also be self-healing to some degree by applying cloud-based analytics to sensor data harvested from thousands of sites to guide and enact targeted predictive and preventive maintenance programs. Spare parts will be ordered, tested, and installed (perhaps by dexterous robots) to exactly align with when they are required to avoid failures but also to avoid unnecessary maintenance and testing.  



Self-driving cars may be getting all the attention, but the big impact of artificial intelligence and machine learning in the enterprise is in cybersecurity, and especially in securing data center networks. And given all the threats data centers are facing this year, the help is much needed.

According to a recent survey of 400 security professionals by Wakefield Research and Webroot, a cybersecurity vendor, 99 percent of US respondents believe AI overall could improve their organizations’ cybersecurity. And 87 percent report their organizations are already using AI as part of their cybersecurity strategy. In fact, 74 percent of cybersecurity professionals in the US believe that within the next three years their companies will not be able to safeguard digital assets without AI.

AI and machine learning are being used to spot never-before-seen malware, recognize suspicious user behaviors, and detect anomalous network traffic.
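As a toy illustration of the anomaly-detection idea, here is a simple statistical baseline standing in for the ML models the article refers to; the traffic numbers and threshold are invented for the example:

```python
import statistics

def is_anomalous(baseline, new_value, threshold=3.0):
    """Flag a traffic sample that sits more than `threshold` standard
    deviations from the historical baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

requests_per_minute = [100, 110, 95, 105, 102, 98, 101, 99]
print(is_anomalous(requests_per_minute, 104))  # typical load -> False
print(is_anomalous(requests_per_minute, 500))  # sudden spike -> True
```

Real systems learn far richer baselines (per host, per protocol, per time of day), but the principle is the same: model "normal" and alert on deviations.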



The past few decades have seen a significant increase in society’s level of awareness and investment in personal and workplace safety. In the opinion of those of us at MHA Consulting, similar attention must be given to business continuity.

In this article, we will sketch out the rise over the past few decades of what might be termed “safety culture,” define an envisioned “continuity culture,” and set forth how such a culture can be brought into being at your organization.

The rise in safety consciousness in today’s society can be seen in everything from the creation of the U.S. Occupational Safety and Health Administration in 1971 to the introduction of polarized electrical plugs to the increasing emphasis on people’s wearing seatbelts and bicycle helmets. In the business world in particular, many companies have over recent decades developed a strong emphasis on safety, with consideration for safety permeating everything their employees do.



You’ve probably been there before, at least once. Colorful lights rotating off the ceiling and walls. Music that plays a touch too fast, usually on purpose. Singers who are … enthusiastic in their performance. We’re talking, of course, about singing karaoke.

People love to watch amateurs sing, as demonstrated by the host of amateur singing shows on television, from The Voice to American Idol to The X Factor. The singers’ hearts always seem to be in the right place, even if the notes and words sometimes aren’t. At the end of the day, it’s the effort that counts, right?

You might be wondering what all the silly, at times humiliating goodness of karaoke singing has to do with your network security. It’s simple, really.

Amateur performance is fine when it comes to singing karaoke. When it comes to managing your network security? Not so much. You wouldn’t want that sloppy-but-well-meaning guy in the pub singing on your favorite artist’s new record. So why would you want anything but the very best securing your network, which houses your most precious data and trade secrets?



Thursday, 11 January 2018 15:50

Are You Confident in Your Network Security?

So you’ve locked down your perimeter defenses tightly and implemented comprehensive monitoring and remediation facilities.

All your employees have been trained to spot potential phishing attacks and your email filtering ensures bad actors get dumped unceremoniously into the street, long before their spam and malware gets anywhere near your gleaming infrastructure.

Even your pentesters have started to complain that they’re running out of attack vectors.

Before you decide to relax, there’s something you may have overlooked.



This is part 2 of a 3-part series on digital blueprints. Click here to read part 1. 

Digital transformation has tremendous potential to unleash value for organizations; therefore, organizations in increasing numbers are formulating digital strategies. However, we find that many are missing significant transformation and value, both of which are made possible by holistic enterprise digital strategies. Many digital strategies are focused too narrowly. For example, some leaders claim they are achieving their digital strategy simply by moving applications and infrastructure to the cloud. A digital strategy establishes the enterprise vision and priorities for digital transformation. To power your digital transformation, leverage a digital blueprint – a structured approach to evaluating opportunity areas, value drivers, and risks, and ultimately aligning the digital path with business drivers.



In 2017, many enterprises came to the realization that the center of data gravity is shifting. Whether it is structured or unstructured, at rest or in transit, enterprise data has moved beyond centralized corporate data centers to the distributed digital edge. The edge is where all the elements giving rise to real-time data generation exist, so organizations are increasingly building it into their data strategies.

For enterprises to extract the most value from their data, they must re-think their IT architectures. Pushing workloads closer to the data at the edge helps overcome latency issues that dramatically slow application and analytics performance, creating an unpleasant experience for users. However, architecting for the digital edge comes with important considerations around balancing protection of data with accessibility, and rules governing data movement and placement. One of these critical considerations is the merits and challenges posed by localization of data, which may include the need for compliance with complex personal data protection requirements. The much-discussed term of the year, data sovereignty, is all about ensuring clarity around where data is located and which laws it is subject to, a big challenge for organizations amid the trend toward cloud adoption.

Policymakers cite various reasons, such as data privacy, cybersecurity, protectionism, and economic growth, when pushing for regulation in this area, whether general or industry-specific. Consolidated Audit Trail (CAT) reporting in the U.S. requires companies to log every securities transaction and ensure the accuracy of timing services at the nanosecond level. The Markets in Financial Instruments Directive (MiFID II) in the European Union imposes new reporting requirements and tests on investment firms.



Thursday, 11 January 2018 15:43

How Data Sovereignty Will Affect IT in 2018

If there was a single, simple action that you could take today that could cut the potential of phishing attacks in half, would you do it?

Great news — taking steps to keep your organization safe from this intrusive type of cyber-attack may be easier than you realized. One-time training for employees to stay vigilant is only the first skirmish in the battle to secure your organization’s digital assets. Ongoing education and reinforcement of the message to be cautious, all presented in a way that employees won’t rebel against, is the first line of defense against spear phishing.

Scope of Damage from Phishing Attacks

The FBI calls them business email compromise scams, but most cybersecurity professionals are more familiar with the term phishing, with spear phishing being the latest way to exclusively target individuals based on their organizational ties or position. With nearly $1.6 billion in losses by U.S. businesses between 2013 and 2016 at organizations of all sizes and segments, spear phishing is costing individual businesses millions of dollars per year. Cyber criminals are targeting real estate, title professionals and attorneys slightly more often, but no business is immune. Any organization in which large sums of money change hands or employees have access to wire transfer information or personal information is in danger.



Last week news broke of two security flaws in computer processors that affect virtually all computers, smartphones and smart devices such as televisions and refrigerators.

The first flaw, nicknamed “Meltdown,” applies specifically to Intel chips. The second flaw, called “Spectre,” is more difficult for an attacker to exploit and has no available patches yet; it lets attackers access the memory of devices running Intel, AMD, and ARM chips.



Conventional wisdom holds that you are fine if your data gets infected or your data storage systems get shut down by ransomware, as long as you have a current backup that is complete and uncorrupted. All you need to do is reset your systems, reinstall the apps, and restore the data.

Unfortunately, that may no longer apply.

"Backing up your data no longer provides an absolute guarantee that you can recover from a ransomware attack,” said Jerome Wendt, an analyst at storage consultancy DCIG.
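One basic precaution, offered here as a generic sketch rather than anything DCIG prescribes: verify backups against checksums recorded at backup time, so silent corruption or encryption of the backup itself is caught before you need it.

```python
import hashlib

def checksum(path):
    """SHA-256 digest of a file, streamed in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_intact(source_path, backup_path):
    """True if the backup still matches the source byte-for-byte.

    In practice the source digest would be recorded at backup time and
    stored somewhere ransomware cannot reach (offline or immutable)."""
    return checksum(source_path) == checksum(backup_path)
```

The key design point is keeping the reference digests out-of-band: a backup that ransomware can reach and re-encrypt is exactly the failure mode the article warns about.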



You can’t wait until disaster strikes to create an emergency communications strategy, so make it a priority in the new year to determine what will work to keep your community safe.

Whether you’re facing an active shooter situation or a simple weather emergency, detailing your communications plan in advance allows your team to spring into action and notify others — keeping your community safe and allowing them to feel protected during times of crisis. Without a strategy in place, your team may struggle to respond to incidents, which can result in additional chaos and confusion. It’s important that your plan is not only detailed but highly flexible, so you’re able to adjust to changing requirements on the fly.

Discuss What Worked and What Didn’t

Taking the time with your communication team to discuss what worked well throughout the year and what didn’t is the first step in updating or creating your strategy. If you were able to successfully reach your community — that’s great! You’re a step ahead, and well on your way to communications success. Would the plan that you created and put into action work well for other types of emergencies? It may help to brainstorm some ideas and how the plan you have could be modified for different events such as widespread power outages in the winter or an active shooter alert.



A recent survey by Radware found that nearly half (45 percent) of respondents had experienced a data breach in the last year, and 68 percent are not confident they can keep corporate information safe. Despite costly and constant breaches, and upcoming data privacy movements such as General Data Protection Regulation (GDPR), companies continue to leave data vulnerable due to outdated and ineffective security policies and processes.

This current state of data insecurity has arisen from the need for organizations to maximize the value of their data by broadly sharing information inside and outside the organization. The problem is that most organizations are largely focused on network security, protecting and hardening the perimeter. But often the data in the “squishy” middle is left vulnerable to threats. If attackers can get in, they’re in. And, most databases — where the critical corporate data resides — have all-or-none access, which is not sufficient to shield against growing cybersecurity threats. 
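The alternative to all-or-none database access is finer-grained authorization. A minimal sketch of the idea, with roles and fields invented purely for illustration:

```python
# Role-based field filtering: each role sees only the fields it needs,
# instead of the all-or-none access the article criticizes.
VISIBLE_FIELDS = {
    "support":   {"name", "email"},
    "billing":   {"name", "card_last4"},
    "analytics": {"signup_date"},
}

def filter_record(record, role):
    """Return only the fields the given role is allowed to read."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

customer = {"name": "Ada", "email": "ada@example.com",
            "card_last4": "4242", "signup_date": "2017-11-02"}
print(filter_record(customer, "support"))
```

Production systems would enforce this in the database itself (views, row- and column-level security) rather than in application code, but the effect is the same: an attacker who compromises one role no longer gets everything.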

Here are three steps companies should take to avoid becoming the next newsworthy data disaster. 



If it is your responsibility to control, facilitate, or join a team of Crisis Management professionals, you are likely to hit some sizable hurdles should this team not have a strong foundation already in place. When dealing with a large organization with employees from all backgrounds, you will quickly understand that few people will be as committed to crisis management as you are. Therefore, building a strong foundation in the early stages is key.

PreparedEx has spent many years combining experience and knowledge on how each individual Crisis Management Team can build and maintain strong commitment and culture through the whole organization.

If there is one lesson we’ve seen repeatedly, it’s that quick wins will not effectively keep your organization prepared for an event. Even before your Crisis Management Plan is created, build a foundation that stands the test of time.



Public crises have become increasingly common around the world. Of course, managing such emergencies is not always easy. For this reason, public administrators have established ways of managing public expectations while helping those affected at the same time. Thanks to technology and increased access to the Internet, communicating with the public has never been easier. Read on for more on this topic.

To learn more, check out the infographic below created by Norwich University’s Online Masters in Public Administration.


Online Masters in Public Administration Program

We’ve all seen the news reports, photos, and tragic stories of towns and businesses impacted by natural disasters. Business professionals who are forced to deal with the aftermath of a natural disaster may experience a range of emotions from relief that it’s over and that they had a disaster recovery plan in place to regret that their disaster recovery plan was inadequate or incomplete, or to despair that they never got around to developing a disaster recovery plan at all.

Disasters and how we respond to them are never one-and-done. In the real world, disaster planning for the next disruption begins immediately after going through an actual disaster event. This means that the weeks and months immediately following a disaster are the most crucial for evaluating and improving your disaster recovery plan. Aside from the site recovery itself, which may be considerable, it is essential to address deficiencies in your plan as soon as possible and practical. For critical communications, these could include data transmission, materials being redirected, or updates to design that were never shared with the disaster recovery provider.



Munich Re has released its 2017 catastrophe review, and disaster related insured losses for the year are the highest on record at $135 billion.

The record losses are driven by the costliest hurricane season ever in the United States and widespread flooding in South Asia. Overall losses, including uninsured damage, came to $330 billion.

The United States made up about 50 percent of global insured losses in 2017, compared with just over 30 percent on average. Hurricane Harvey, which made landfall in Texas in August, was the costliest natural disaster of 2017, causing losses of $85 billion. Together with Hurricanes Irma and Maria, the 2017 hurricane season caused the most damage ever, with losses reaching $215 billion.



(TNS) - The number of deaths associated with this year’s severe flu season has quadrupled in a week, according to the latest update from the county Health and Human Services Agency.

An additional 34 deaths were added to the tally Wednesday, including a one-year-old boy, as influenza raged throughout the region in what experts say is the fiercest battle with the rapidly-mutating virus they’ve experienced since 2009, when a pandemic filled emergency rooms from Oceanside to Chula Vista.

That has been the case this year, as well. A feverish mob began arriving at local emergency rooms right around the holidays, creating long waits and forcing some facilities to set up tents in their parking lots to relieve pressure on their emergency waiting rooms — just as they did in 2009 when the H1N1 epidemic hit.



Is your business part of the 48% that lacks a BC plan yet still regards itself as ready for trouble? If so, it might be time to start a BCM program.

A recent study found that 48 percent of small businesses are operating without any type of business continuity plan, yet 95 percent of the businesses indicated they felt they were prepared for any disasters that might strike.

Is your business part of that 48 percent that lacks a BC plan and yet still regards itself as ready for trouble? If so, perhaps you think your insurance will cover you if something goes wrong, or that your evacuation plan will help you out. Or maybe you have an old, dust-ridden binder lying around labeled “Business Continuity Binder” that you haven’t looked at in ages. If any of these describes your company, chances are that you are not truly prepared for disaster. From Hurricane Maria and the shooting in Las Vegas to the current fires in California, history shows us that companies that do not proactively consider how to respond to events are among the last to get back to business.

So, why do people and companies neglect to implement business continuity management (BCM) in their organizations even though they know it’s the right thing to do and can ensure the survival of their business? That’s a difficult question to answer, I think because it has very little to do with business continuity management and a lot to do with human nature.

People and companies are inherently motivated to do what’s good for them. The problem is that accurately perceiving “what’s good for them” is not nearly as easy as it sounds—and even if a company can figure this out, they may not believe that it is possible for them to do it.




Thursday, 04 January 2018 15:56

Do the Right Thing: Start a BCM Program

As we all stride into the New Year with new goals and business aspirations, many end customers will be doing the same. Businesses small and large will invest further in the cloud, and those who’ve yet to make the move will likely do so in 2018. Now’s the time to fine-tune your cloud strategy and win more business.

In 2018, analysts see the cloud as a key driver of business transformation. Forrester predicts that more than 50% of global enterprises will rely on at least one public cloud platform to drive digital transformation. “Cloud is truly business critical and is now a mainstream enterprise core technology,” writes Dave Bartoletti, vice president and principal analyst at Forrester.

Further, IDC sees boom times for anything that resembles a cloud. The research firm predicts that by 2021, cloud services and cloud-enabling hardware, software and services spend will double to more than $530 billion.



(TNS) — NAPA, Calif. — The Sonoma ridgeline was a sunrise of flame as Sgt. Brandon Cutting led deputies up country roads to pound on doors, hollering “Sheriff’s Office!”

Thirty minutes later, with Cutting huffing from exertion and choking in thick smoke, the evacuation of Redwood Hill was still playing out one door at a time. He followed the sound of shouts to an officer struggling to carry a disabled woman. Her house was on fire. Her shoe on the ground. The night around them was orange in every direction.

It was 11 on a Sunday night, the beginning of what would be the most destructive fire siege in California history. Frantic rescues were taking place across wine country as heavy winds ripped down power lines and the dry hills lit up in flames. Modern technology in the form of robocalls and digital alerts would not join the fight to roust sleeping residents for another half an hour.



Replacing the choice to opt-in with the choice to opt-out has proven to be one of the most successful policies to come out of applied behavioral economics. For example, in France citizens are automatically enrolled in the organ donor registry unless they choose to “opt-out.” Only 150,000 people, out of France’s approximately 66 million, have opted out of the program.

A recent McKinsey report suggests that making flood an insured risk on standard homeowners policies in high-risk states and giving homeowners the option to opt-out could generate as much as $50 billion annually in untapped revenue.

Policyholders could decide to opt out of flood insurance, but experience from several markets (terrorism insurance, voluntary retirement contributions, etc.) shows that many will not.



Thursday, 04 January 2018 15:45


PCI, HIPAA, SOX, GLBA. The alphabet soup of government regulations and compliance standards is enough to give any CIO a migraine. But just when you thought it was safe to come out of the regulatory waters, the General Data Protection Regulation (GDPR) is right around the corner. Haven’t heard of GDPR? You soon will—and you’d better pay attention.

Previous cybersecurity regulations such as Safe Harbor, which was overturned by court orders, and the EU-U.S. Privacy Shield left room for improvement. The EU then created GDPR to add teeth to European regulations for how organizations handle security. Essentially, the EU is augmenting regulations to ensure that all organizations protect the data subjects—the people—from companies conducting abusive personal data processing.



So 2017 is in the rear-view mirror, and here comes 2018, all bright-eyed and bushy-tailed. What should we be ready for this year in terms of risk management trends? Here are three that are likely to have an outsized impact:

  • Cyber security risks will continue and get more dangerous. Maintaining information and network security will grow even more challenging.
  • The cloud will bring risk. The increased dependence on cloud-based services is creating a new kind of risk that many companies have yet to address. 2018 is likely to see a deepening engagement with the vulnerabilities caused by this new reality.
  • New rules will bring unexpected risks. As companies adapt to the new regulatory regimes, the changes they are obliged to make will create unexpected new dangers.

The traditional threats to business operations from nature, people, and technology are still out there and will doubtless rear up and make themselves felt in 2018. These will include bad weather, employee mistakes, and so on. But in terms of new risk management trends, the three developments mentioned above are likely to be especially prominent. We’ll take a closer look at each one below.



Wednesday, 03 January 2018 15:07

3 New Risk Management Trends for 2018

Why Cybersecurity Equals Job Security for CEOs, CFOs and Others

How much do you worry about being hacked? How much should you worry? In 2017, cyber-attacks doubled from 2016 levels—and the insider phrase now is “It’s not if, but when, you’ll be attacked.” In fact, organizations are silently and invisibly breached every day, with the private and confidential information of consumers, customers or corporations harvested leisurely by terrorists and criminals.

Obviously, this has become an issue that CEOs and other leaders need to take much more seriously than in the past. Amazingly, these breaches do not result from decisions made at the brain-surgeon level. Most cybersecurity (cyber) hacks are enabled by simple errors or laxity in areas like basic software and IT hygiene. This includes the failure to provide minimal security awareness training.

Why does this happen? Well … who likes changing passwords, or waiting for that security-code text or phone call, especially when you have ever-growing workloads and tight deadlines? And “the Suits” exempt themselves from the basic common-sense procedures which everyone else must endure. Hacks become inevitable.



Wednesday, 03 January 2018 15:06

How Safe Is Your Business?

(TNS) — The Lake Oroville spillway crisis and evacuation last February might have only lasted a few days for Yuba-Sutter residents, but the ordeal left many with unanswered questions and a newfound fear of the unknowns of living downstream from an aging water storage facility and system.

Questions about who is to blame for the spillway's failure, how it happened and what can be done to prevent it from happening again continue to resonate with local residents close to a year after the event occurred.

The Appeal-Democrat reached out to community members and officials about the incident to gauge how they were impacted by the event, what the most significant takeaway was for them and what they would like to see changed moving forward.

Their responses varied, but all seemed to agree that there are positives that can be taken from the Lake Oroville spillway incident and the events that followed.



Wednesday, 03 January 2018 15:05

Impacts, Lessons from Oroville Spillway Crisis

Once upon a time, storage was storage and analytics lived somewhere else – far removed from the storage universe. But the world has changed and the advent of big data has brought the two close together, as shown by the Veritas 2017 Data Genomics Survey.

By analyzing 31 billion files globally, it found that data repositories have grown by nearly 50 percent annually. This is largely driven by the proliferation of new apps and emerging technologies such as Artificial Intelligence (AI) and the Internet of Things (IoT) that leverage massive data sets. If data continues to grow at this rate and companies do not find more efficient ways to store and manage their data, organizations worldwide may soon be confronted with storage expenses topping billions of dollars.
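Fifty percent annual growth compounds quickly; a back-of-the-envelope projection makes the point (the starting size is chosen arbitrarily for illustration):

```python
def project_storage(initial_tb, annual_growth=0.5, years=5):
    """Project repository size (TB) under compound annual growth."""
    return [round(initial_tb * (1 + annual_growth) ** y, 1)
            for y in range(years + 1)]

# A 100 TB repository growing ~50% a year exceeds 750 TB within five years.
print(project_storage(100))
```

At that rate storage needs roughly double every 21 months, which is why unmanaged growth turns into the billion-dollar expense the survey warns about.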

Hence many organizations are looking to big data and analytics applications to lessen their woes. Storage vendors are also arriving at the party. Instead of cobbling on big data and analytics tools from other vendors, some are adding such functionality within their storage platforms. There are many examples to choose from. Here is a sampling.



Fog computing, also known as edge computing, has begun to take off. Fog computing is an intermediate layer that extends the cloud layer. As Giti Javidi, Ehsan Sheybani, and Lila Rajabion write in Focusing in on Fog Computing’s Implications for Business, “simply put, fog is a cloud close to the ground.” With fog architecture, some of the computing is moved to the edges, away from centralized data centers and cloud solutions, allowing data to be processed locally on smart devices rather than being sent to the cloud for processing.

Why are companies turning to fog computing? According to Javidi and her coauthors, they are doing it for higher efficiency, better security, faster decision-making, and lower operating costs. Internet of Things devices, in particular, can put a huge strain on the internet infrastructure. In their report, Javidi et al. describe a case in which fog computing reduces that stress: a jet engine can create about 10 TB of performance and condition data in 30 minutes. Transmitting that data to the cloud and getting the response data back takes a lot of time and bandwidth, which introduces latency. Using a fog environment, the processing can take place on a local router, yielding data that can be acted on in just milliseconds, while data is also sent on to the cloud for historical analysis and longer-term storage.
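A toy sketch of that split: act on each reading locally at the edge, and ship only a compact summary upstream. The sensor values and the alarm threshold are invented for illustration.

```python
def process_at_edge(readings, alarm_threshold=900):
    """Process sensor readings locally (the 'fog' layer).

    Returns (alerts, summary): alerts are handled immediately at the
    edge with millisecond latency; only the small summary dict is
    forwarded to the cloud for historical analysis and storage."""
    alerts = [r for r in readings if r > alarm_threshold]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    return alerts, summary

# e.g. a window of engine temperature readings handled on a local router
alerts, summary = process_at_edge([850, 870, 910, 860])
```

The bandwidth win is the point: instead of streaming every raw reading to the cloud, only the three-field summary crosses the network, while the out-of-range reading is acted on locally.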



Wednesday, 03 January 2018 15:02

Fog or Cloud? Nope. Fog AND Cloud

An Effective Risk Management Tool for Your Business

With corporate data breaches on the rise, many businesses are rethinking their security strategies and plans for risk management. This piece discusses how the repercussions of breaches are pushing business leaders toward more holistic approaches to security: preventative measures in addition to attack response plans that include cybersecurity insurance.

With corporate data breaches on the rise, many businesses are rethinking their security strategies and plans for risk management. Hacks, breaches and outages are proving to be more than just technical issues. Instead, they’re leading to larger potential fines in countries within the EU, loss of customers and not just potential negative reputation for affected companies, but the potential for actual physical harm caused to employees or customers in certain verticals.

These repercussions are causing business leaders to implement more holistic approaches to security that include preventative measures in addition to attack response plans. Preventative measures serve to better network defenses and implement best security practices. Response plans should include cybersecurity insurance, a coverage plan designed to be implemented when an attack occurs.



With the holiday season coming to a close and the new year upon us, it is time to bring focus back to planning for the year ahead.

As we saw in 2017, no agency or community is immune to both natural and manmade threats to people and property.

The following list of tips provides guidance for emergency officials to use to make improvements to critical notification planning in 2018:

  • Emergency officials who are improving critical notification planning need to conduct an updated risk assessment. This risk assessment must focus on possible emergencies and threats that could impact your area.
  • Create a critical notification plan for each of the potential scenarios. It should include the most optimal modes of communication for each threatening situation. For example, communicating with the public about an active shooter threat in a school will differ from communications due to a forest fire encroaching on a community.
  • Download the white paper: Communicating in Crisis – How Preparedness Leads to Successful Crisis Management to utilize the free planning template and document important response processes.
  • Identify at-risk populations and how to communicate with them in an emergency scenario. This should include workers within the agency, as well as the general public in their homes or travelers to the area. It should also include businesses affected in the area.
  • Once the plans are put into place, practice drills should be implemented immediately.
  • As the emergency officials in charge of community safety, you need to evaluate any existing practice drills to ensure each method of communication is working effectively. For example, you may send out test messages and ask for responses, create focus groups within the target population, or send surveys to community members. These tests and surveys should help identify any communication gaps in the critical notification plan.
  • Ongoing training for emergency officials needs to be scheduled throughout the entire year to keep teams up to date and to make adjustments where necessary before a crisis occurs.
  • Having emergency supplies in position prior to an incident is essential in carrying out any rescue mission.
  • Your organization is encouraged to enlist the public’s help via communications, both in spotting signs of an emergency and in providing ongoing updates.



Each new year is a time for new beginnings – considering what you’ve done for the previous year and what needs to change in the next.

If you haven’t used your emergency communication strategy in nine months or more, it’s time for a refresh. Teams and reporting structures can change, and new people are in positions that haven’t been trained. An annual checkup allows you to take advantage of any new functionality that has been introduced and remind the organization how important it is to stay in touch and informed in the event of an emergency. Even if your personal New Year’s resolutions tend to fall by the wayside, this is an annual checkup that you’ll want to make time for!



We’re helping you streamline your BIA, cutting it down from 8 to 4 hours or less for each business unit and maximizing your BIA interview.

In last week’s post, I mentioned that we veteran BCM professionals will need to learn some new tricks in order to work effectively with members of the millennial generation. Specifically, I gave the example of the traditional business impact analysis (BIA) meeting as something that probably won’t work very well with colleagues and clients who are used to speedy and informal ways of doing things.

After finishing that post, it occurred to me that I had left readers hanging in terms of how exactly they might go about making their BIA meetings more efficient. In today’s article, I’m going to make that up to you by giving you my 5 Tips for Making the Most of the BIA Interview. These tips aren’t only relevant when working with millennials. These days, pressure for BIA professionals to be more efficient comes from across their organizations.

In days gone by, the BIA process would take anywhere from 6 to 8 hours for each business unit, from the pre-work and interview through the follow-up phase to the final approval of results. In today’s world, the entire BIA process had better take 3.5 to 4 hours or less for each business unit, from pre-work to final approval.

Needless to say, even as you are speeding things up, you’re still expected to cover all the important bases, doing as good or better a job as you did before. Nobody said it was going to be easy! But hopefully the tips below will make it easier for you.

And in the spirit of the subject, I’m going to try to keep things short and sweet.



When a disaster strikes your place of business, you don’t have much time to act. The phrase “time is money” is certainly applicable here – ITIC’s latest survey data finds that 98% of organizations say a single hour of downtime costs over $100,000.
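The arithmetic is sobering even at the survey's lower bound. A back-of-the-envelope sketch (the hourly figure is ITIC's reported threshold; the outage durations are hypothetical examples):

```python
# Back-of-the-envelope downtime cost estimate. The per-hour figure is
# the lower bound reported in the ITIC survey; the outage durations
# below are hypothetical.
COST_PER_HOUR = 100_000  # USD

for outage_hours in (1, 4, 24):
    cost = outage_hours * COST_PER_HOUR
    print(f"{outage_hours:>2}-hour outage: at least ${cost:,}")
```

A full business day offline already implies a seven-figure loss at that rate, which is why the recovery options below matter.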

One way or another, your organization must get back up and running. With the rise of cloud-based applications, there’s been an increased level of “workplace recovery from home” scenarios.

After all, why would you want to spend money on a workplace recovery solution if you can connect to your applications from home? While the concept looks good on paper, the reality is that there are several drawbacks.



Tuesday, 02 January 2018 20:58

Why Workplace Recovery Is Critical

In our 2017 recap, we’re paying particular attention to SaaS, IaaS, and DRaaS—and what business continuity and the cloud mean for you.

From a business continuity and disaster recovery point of view, 2017 might go down as the year of natural disasters, but there are some other issues which will probably have longer-term impacts on business continuity.

There were many high-profile disasters, natural and otherwise, but it was our continued shift toward cloud-based computing solutions that had the most significant impact on the world of business continuity and disaster recovery.

In this post, we’ll look back over the year just past, paying particular attention to the explosion in business use of three types of cloud solutions—software as a service (SaaS), infrastructure as a service (IaaS), and disaster recovery as a service (DRaaS)—and the impact of this change on the practice of business continuity.



Protect your residents and your staff in 2018

The new year is the perfect time to evaluate your emergency communication plan. Find out how emergency communications methods, such as public warnings, internal messaging, notification systems, and mobile apps, can protect your organization.

Necessity of Emergency Communications

During an emergency scenario, it is too late to coordinate the logistics of communications. These must be established and practiced long before the need arises. Take the 2014 Super Bowl between the Seattle Seahawks and the Denver Broncos. Prior to the pigskin extravaganza, six envelopes of white powder were received at hosting hotels near the stadium.

Emergency officials across the board, from local fire departments to the Bergen County Hazmat team and the FBI’s Joint Terrorism Task Force, dedicated full forces to the situation. The problem was that while every organization was working the faux anthrax case, there was a major lack of emergency resources available to securely coordinate the half a million fans entering the stadium on Super Bowl Sunday. All of this stemmed from a lack of communication among government agencies on how to handle the emergency.



Feeling Invincible is Ignorance to Reality

The Financial Industry Regulatory Authority flies under the radar of many organizations. Being fully aware of potential violations and their consequences can prevent needless trouble.

“It won’t touch us.” Such is the position of many organizations when it comes to the Financial Industry Regulatory Authority (FINRA). Rather than keeping FINRA firmly on their radar screen, these organizations downplay its power and their own vulnerability, believing themselves immune to the potential for violating FINRA rules and facing the consequences. However, gambling on this immunity is foolhardy at best and dangerous at worst, for several reasons.



Tuesday, 02 January 2018 20:52

Why You Need To Worry About FINRA

As 2017 winds down, I thought it might be worthwhile to knock back a cup of Earl Grey and see what the tea leaves show lies ahead for the world of business continuity in 2018.

Here’s the thumbnail version of my forecast:

  • The overall picture of most BC programs is going to be one of ongoing uncertainty, with lots of small-scale agitations but no dominating trends.
  • Two peripheral trends I see are the continued movement of services to the cloud and the increasing influence of millennials on the world of business continuity.
  • In the world at large, I think we’re going to see the continuing proliferation of the risks associated with climate change and terrorism and also the potential movement of international conflict into cyberspace. These developments could have significant impacts on the practice of business continuity management.

One thing that we know is coming in 2018 is the European Union’s new General Data Protection Regulation (GDPR). The EU’s strict new privacy protection rules go into enforcement on May 25, as I discussed in my blog post from last week, GDPR Compliance: A Heads-Up for Business Continuity Professionals. Take a look if you would like to know more about what GDPR might mean for your organization.

And while I’m making suggestions for further reading, let me call your attention to an interesting survey by Continuity Central. These are the interim results of their survey of business continuity professionals worldwide, asking people what they see happening in their programs in 2018. The results of this survey triggered some of the points I make below.

In the rest of this article, I’ll share some additional thoughts on the topics mentioned above, along with a few others.



If you’re new to Business Continuity, you have a lot to learn.  A thorough understanding of Risk – and how to assess Risk – need not be on your To Do list.

As a BCM professional, you already know how much time you spend on Risk Assessments.  Have you ever considered how little value a BCM-centric Risk Assessment provides?

Most large organizations have a Risk Management Department. It catalogues, monitors and manages risks. If that’s already its full-time job, why are you conducting Risk Assessments? Even in a smaller company, is a BCM practitioner really the person most qualified to conduct Risk Assessments?

Risk Management is more a science than a project.  Risk Managers spend their time focusing on risks. Most small businesses without dedicated Risk Management departments understand their organization’s risks – even if they don’t act on them.  Those who set a business’ strategic direction consider risk in those plans (regardless of whether they do so consciously).



At Forrester, we have developed an assessment to help organizations understand their continuous deployment maturity. The assessment should take 10 minutes or less to complete, with the outcome identifying where you are in your continuous deployment journey. DevOps teams should focus on building four critical competencies: process, structure, measurement, and technology. Your honest assessment of these competencies will help identify key areas of improvement and get everyone in the organization on the same page. Additionally, doing such an assessment might just avoid the disconnects between leadership and DevOps teams identified in my last blog, Executives Overestimate DevOps Maturity.

DevOps is predicated on teams driving inclusive behaviors such as collaboration and leveraging feedback loops, dismantling silos of functional excellence, and empowering product teams to deliver business outcomes. To support this, we identified four competencies that enable continuous deployment:



Organizations Must Invest in Professionals Needed to Ensure Successful Digital Transformation

As technology becomes an increasingly essential driver of business success for enterprises around the world, determining how to effectively and securely pursue digital transformation is of critical importance. Organizations must strike the appropriate balance of proactively seeking to deploy new and emerging technologies while still taking care to address the new risks and threats that these technologies may introduce.

Organizations across the world are understandably eager to capitalize on new technologies capable of enabling the digital transformation needed to thrive in today’s global economy. While embracing innovation and investing accordingly is an admirable approach, enterprise leaders must adopt a holistic mindset before calculating the best way for their organizations to transform.

As a starting point, organizations pursuing digital transformation should evaluate their business data and their customer data, and make sure they have the right talent in their organizations to securely leverage the resulting opportunities. That includes highly skilled and well-trained governance, risk management, audit and cybersecurity professionals. Small and medium enterprises (SMEs) can scale up their basic security services by going to managed security services, while large enterprises can continue to build talent and enhance their technology governance.



Thursday, 21 December 2017 16:04

New Technology Requires New Hires

Before big data and fast data, the challenge of data movement was simple: move fields from fairly static databases to an appropriate home in a data warehouse, or move data between databases and apps in a standardized fashion. The process resembled a factory assembly line.

In contrast, the emerging world is many-to-many, with streaming, batch or micro-batched data coming from numerous sources and being consumed by multiple applications. Big data processing operations are more like a city traffic grid — a network of shared resources — than the linear path taken by traditional data. In addition, the sources and applications are controlled by separate parties, perhaps even third parties. So when the schema or semantics inevitably change — something known as data drift — it can wreak havoc with downstream analysis.

Because modern data is so dynamic, dealing with data in motion is not just a design-time problem for developers, but also a run-time problem requiring an operational perspective that must be managed day to day and evolve over time. In this new world, organizations must architect for change and continually monitor and tune the performance of their data movement system.



While consumers and many businesses are already enjoying the fruits of digital transformation, the insurance sector can hardly bear it. In many cases, legacy systems leave those in the industry with outdated infrastructures and processes.  Looking to get out of the woods, insurance institutions know now is the time to tame the technology beast and make a digital transformation to:

  • Keep pace with the expectations of customers who have already embraced next-generation IT innovations
  • Meet rigorous compliance regulations regarding data security
  • Simplify and enhance employee jobs with the latest digital tools and online collaborative business applications



The internet is like a big city with lots of amazing sights and many useful services—but also many shady areas and lurking predators. And the predators don’t necessarily stick to the bad parts of town: sometimes they come out to pick pockets on the nicest boulevards.

So far, our Corporate Security Awareness series has looked at how business continuity professionals can help their co-workers (and their organizations) stay safe when using non-workplace Wi-Fi networks, personal devices they may use for work, and email.

In today’s post, the fourth and final one of the series, we are going to talk about how BC managers can promote safer internet use and web browsing at their organizations.

Business continuity managers can and should play a role in advocating for safer policies in all of these areas, even though direct responsibility for configuring technology, establishing policies, and training users lies outside the BC department. By raising the matter of internet security and safety with their partners in IT security and other departments, BC managers can raise awareness and promote the adoption of safer policies. As BC professionals, we need to be just as concerned with preventing outages and issues as with responding to them.



A quick glance at the accelerating technological change over recent years, and the subsequent upheavals, could be enough to make us all fear for the future of the global economy. However, there are good reasons to be hopeful: the rapid changes in today’s interconnected world call for a renewed interest in international standards, making them more important than ever.

Change is nothing new. Nobel laureate Bob Dylan sang that “the times they are a-changin’…” back in 1964. The difference today is the pace of change. In his book, Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations, Thomas Friedman sees the world at a turning point. He believes that technology, globalization and climate change are reshaping our institutions – and rapidly. As his subtitle notes, this is an “age of accelerations” and we all need to keep up or risk getting left behind.

Given Friedman’s thinking on “accelerations” in technology and the disruptions it can cause, it is tempting to consider the impact on the “institution of standardization”. First, what is the rightful place of international standards in today’s global economy? Second, does cross-organization collaboration offer any clues about the nature and impact of world trade?



Thursday, 21 December 2017 15:57

Why the future belongs to standards

Twinkling lights, whiffs of peppermint, and holiday tunes aired everywhere you go — these are just a few of the signs that the holidays are upon us.

As an emergency manager, you must also account for less savory signs, such as increased traffic, an influx of travelers, and unexpected wintry weather. One way to provide your community with a calm and festive holiday season, no matter what emergencies come your way, is via the CodeRED Mobile Alert app.



Central bankers around the world have set up or are creating departments to embrace big data in the quest for deeper insight into the economies they manage.

"Isaac Asimov once said, ‘I do not fear computers. I fear the lack of them,’" David Hardoon, chief data officer at the Monetary Authority of Singapore, said in a recent speech. "We are now starting to put in place the necessary tools, infrastructure and skillsets to harness the power of data science to unlock insights, sharpen surveillance of risks, enhance regulatory compliance and transform the way we do work."

Authorities like Hardoon are tapping publicly-available sources such as Google Trends and jobs websites to help "nowcast" their economies, and confidential data like credit registers that can help identify a stressed bank. Collection of micro data increased after the financial crisis, when policy makers realized they lacked the depth of information to make appropriate decisions.



(TNS) - As federal investigators piece together what caused an Amtrak train to derail near Olympia, killing several passengers and injuring dozens more, we’ve compiled what we know so far.

Our reporting began early Monday, with reporters and photographers stationed across the Pierce County area, hearing stories from survivors and gathering details on the crash.

This is a rundown of our notes:



At this very moment, 1,800 thunderstorms are occurring around the world. Within each one, multiple threats are lurking. Some of these threats may remain unnoticed until the moment they strike - damaging homes, destroying property and claiming the lives of those in their path. Severe weather affects everyone. However, with modern weather intelligence technology like advanced storm tracking, it is possible to be more prepared for notable weather events, even ones that seemingly emerge from nowhere.

Advanced storm tracking technology analyzes complicated weather behavior and presents it to users in an easy-to-understand format. The aim of this technology is to deliver life-saving weather intelligence to people everywhere during dangerous weather situations. A number of unique attributes make this technology more advanced, and therefore more effective, than standard weather forecasting and tracking products.


Baron (a leader in critical weather intelligence) has been a pioneer in severe weather detection and storm tracking for over two decades. Baron’s original storm tracking technology, developed in the late 1990s, was a simple drag-and-drop function based on the storm parameters known by the operator. Automated tracking soon followed, along with a new approach to automatically identifying the most severe location in a storm. Rather than tracking the storm’s center, Baron’s advanced algorithms identify and follow specific threats throughout a storm’s expanse. Not tracking the storm’s center exclusively enables the technology to calculate precise threat arrival times, rank potential tornado probability, and alert people in harm’s way. Seven other key attributes also make Baron storm tracking so advanced.

1. Accessibility

Any industry or individual with an immediate need for weather awareness can utilize and benefit from advanced storm tracking technology like that of Baron. No in-depth knowledge of weather forecasting or algorithms is required. Baron algorithms, for example, remove the need to analyze complex information on the user side.
All data for advanced storm tracking comes pre-analyzed and interpreted, so users of the technology can have the situational awareness they need to make tough decisions when lives and assets are on the line. Farmers can protect their crops, pilots can stay up-to-date on potentially hazardous flight conditions, and public safety officials can make sure the communities they serve have more time to act when weather is imminent, all without having to worry about the advanced science and math behind the technology.

2. Time and place specific

Knowing not only how, but also when a community will face devastating weather can make a measurable difference in preventing damage. For example, in Baron technology, each individual storm track contains data that precisely determines which areas will be affected by a threat, including a list of estimated arrival times. The technology allows users to predict, down to a neighborhood level, when a storm will make its biggest impact.
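As a rough illustration of the arrival-time arithmetic behind such tracks (a simplified flat-plane sketch under assumed coordinates, not Baron's proprietary method; the function name and numbers are hypothetical):

```python
import math

def eta_minutes(storm_xy, storm_velocity_kmh, point_xy):
    """Estimate a storm threat's arrival time at a point, in minutes.

    Simplified flat-plane sketch: project the point's offset onto the
    storm's direction of travel and divide by its speed. Real systems
    work with geodesic coordinates and track polygons.
    """
    dx = point_xy[0] - storm_xy[0]
    dy = point_xy[1] - storm_xy[1]
    vx, vy = storm_velocity_kmh
    speed = math.hypot(vx, vy)
    if speed == 0:
        return None  # stationary storm: no meaningful arrival estimate
    # Distance along the storm's direction of motion toward the point (km).
    along_track = (dx * vx + dy * vy) / speed
    if along_track <= 0:
        return None  # point is behind the storm's path
    return 60.0 * along_track / speed

# A town 30 km due east of a storm moving east at 60 km/h:
print(eta_minutes((0, 0), (60, 0), (30, 0)))  # 30.0 (minutes)
```

Repeating this for each location in a track's footprint yields the kind of per-neighborhood arrival list the text describes.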

3. Threat-specific tracking

Within a single storm, multiple threats may require immediate attention and necessitate advanced tracking techniques. Advanced storm tracking technologies often concentrate on identifying specific dangers—hail, high winds, flooding and potential tornadoes—and then determine their locations and magnitudes. Baron Storm Tracking, in fact, pinpoints all individual threats at once, and then tracks them up to one hour into the future. Other storm tracking methods may focus on following the middle of the storm. This approach doesn’t yield the best results because the center of the storm could be less dangerous, while the more serious threats make their way into communities without proper warning.

4. Tornadic potential

Potential tornadoes can be identified sooner with some of advanced storm tracking’s severe weather algorithms. For example, the Baron Tornado Index (BTI) fuses real-time radar data with atmospheric conditions present in and ahead of a storm to generate the likelihood of tornadic activity. Results are updated in real time and presented on an easy-to-read scale of 1-10—the higher the value, the greater the probability. Additionally, Baron algorithms monitor and track rotating winds in the atmosphere, along with other parameters, to mark the location where tornadic development is most likely to occur.
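The BTI itself is proprietary, but purely as an illustration of the general idea of fusing several normalized inputs into a 1-10 index, a toy version might look like this (the weights and input names are invented, not Baron's):

```python
def tornado_index(rotation, shear, instability):
    """Fuse normalized inputs (each 0.0-1.0) into a 1-10 index.

    Purely illustrative: the real Baron Tornado Index uses real-time
    radar and atmospheric data with its own proprietary weighting,
    not these made-up coefficients.
    """
    weights = {"rotation": 0.5, "shear": 0.3, "instability": 0.2}
    score = (rotation * weights["rotation"]
             + shear * weights["shear"]
             + instability * weights["instability"])
    # Map the 0.0-1.0 score onto the 1-10 display scale, clamped.
    return max(1, min(10, round(1 + 9 * score)))

print(tornado_index(0.9, 0.8, 0.6))  # strong rotation and shear -> 8
```

The point is only that a single easy-to-read number can stand in for several simultaneous measurements, which is what makes such an index usable under time pressure.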

5. Usability

In many cases, weather tracking calls for evaluating several data products at once to generate a comprehensive picture of a storm. This procedure demands more simultaneous attention than most people can give while remaining lucid and aware of their situation. To rectify this, advanced storm tracking technologies do much of the work and evaluation ahead of time. For example, Baron technology automatically completes much of the detailed analysis so users can focus on what matters most—staying alert to the greatest dangers and communicating that information to all relevant parties. Every data point and visual cue in Baron Storm Tracks is self-explanatory, and locations of hail, high wind shear, and more are pre-interpreted. This kind of technology provides more insight into difficult storms faster, giving users the confidence they need to make mission-critical decisions.

6. Continual analysis

Data analysis in advanced storm tracking happens in real time, and information is updated continuously. By sampling lower-elevation radar scans and gathering information before the entire scan is complete, accurate and actionable intelligence can be relayed to the user faster than with other methods. The technology quickly identifies embedded dangers within a storm that can be hard to diagnose and gives frequent updates on its speed, path, and arrival time. It provides the most up-to-date information sooner, giving those in the path of a storm more time to act. Building on these capabilities, Baron continues to refine its storm tracking solutions; new intelligent processing released this year delivers faster detection and more accurate location of the most critical part of the storm.

7. Site-specific alerts

Much of the advanced storm tracking technology around today existed almost 10 years before the iPhone was introduced. Now that smartphones are ubiquitous, advanced storm tracking can deliver site-specific, life-saving alerts to warn subscribers in threatened areas. The Baron system, for example, determines speed, wind direction, shear, and more, then uses this collected data to automatically determine which areas of a storm require advance notification and alerts everyone in harm’s way. Every geo-specific alert is targeted, so users of the Baron app will only receive a push notification if they are within the specified threat range.
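At its core, the targeting decision is a geospatial containment test. A minimal sketch, assuming a circular threat radius and great-circle distance (production systems like Baron's use storm-track polygons rather than simple circles; the coordinates below are hypothetical):

```python
import math

def within_threat_range(user_lat, user_lon, threat_lat, threat_lon, radius_km):
    """Decide whether a subscriber should receive a push notification.

    Minimal sketch of geo-targeted alerting: haversine great-circle
    distance compared against a circular alert radius.
    """
    r_earth = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(user_lat), math.radians(threat_lat)
    dp = math.radians(threat_lat - user_lat)
    dl = math.radians(threat_lon - user_lon)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    distance_km = 2 * r_earth * math.asin(math.sqrt(a))
    return distance_km <= radius_km

# Subscriber roughly 19 km from a warned storm cell, 25 km alert radius:
print(within_threat_range(34.73, -86.59, 34.90, -86.55, 25))  # True
```

Anyone outside the radius is simply skipped, which is what keeps geo-specific alerts from becoming noise for users far from the threat.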

Critical Weather Intelligence for everyone.

For decades, companies like Baron have been redefining storm tracking technology, taking it to new levels of precision. They have made it their mission to ensure the safety and livelihood of everyone with a need for severe weather intelligence, and continue to build upon their technology to ensure everyone has access to the critical weather intelligence they need to help their decision making.

By Gabe Gambill, VP of Product & Technical Operations at Quorum

When it comes to an effective disaster recovery strategy, your team has several options. You can maintain your own DR site in a remote location, handle it on-site or go with a DRaaS solution. Then there’s colocation – where you migrate your DR to a provider’s data center, installing your own servers, network and data storage there. 

While most teams have heard of colocation, some aren’t sure how it differs from other kinds of disaster recovery or if it’s right for them. So let’s talk about the benefits of colocation and the criteria to follow when choosing a colocation facility.

The Benefits of Going Colo


One benefit compared to DRaaS is that control stays in your hands. When you outsource disaster recovery completely, it can take some weight off your shoulders – but you also hand over a certain amount of control and visibility. Colocation gives it back to you. True, your data center is owned by someone else, but you control the hardware and software and retain greater day-to-day visibility.


Going with your provider’s data center can offer more robust power capacity and stronger network performance. If your bandwidth requirements increase, you may be able to take advantage of volume pricing while skipping multiple contracts and SLAs.

Cost Savings

Colocation facilities tend to charge by space, which means your price tag ultimately comes down to the kind of equipment and number of servers you’ll install. However, you won’t be paying the actual costs of owning and maintaining your own data center. Compare your potential price tag for power, cooling, HVAC units and backup generators to the facility charge; chances are you’ll save money.
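That comparison is straightforward to rough out. A sketch with entirely hypothetical, round-number figures (substitute your own utility bills and colo quotes):

```python
# Hypothetical annual cost comparison: owning and running your own
# data center space versus paying a colocation facility charge.
# All figures below are invented for illustration.
in_house = {
    "power": 60_000,
    "cooling_hvac": 35_000,
    "backup_generators": 20_000,
    "maintenance_staff": 45_000,
}
colo_monthly_per_rack = 1_500
racks = 4

in_house_annual = sum(in_house.values())
colo_annual = colo_monthly_per_rack * racks * 12

print(f"In-house: ${in_house_annual:,}/yr, colo: ${colo_annual:,}/yr")
print(f"Estimated savings: ${in_house_annual - colo_annual:,}/yr")
```

With these made-up numbers the colo charge comes in well under the ownership costs; your own inputs will decide whether the same holds for you.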


Not all colo providers offer support, but if they do, having on-site expertise can spare your team from time-consuming server and equipment maintenance. The provider’s team may also have advanced skills to facilitate a smoother disaster recovery, giving you better peace of mind and freeing up your team to focus on other initiatives.

Selecting a Colocation Facility

Not all data centers are created equal. One critical component is location. If and when disaster hits, can you get there in a hurry? What if something happens to your primary site and your recovery depends solely on your colo site? Make sure you choose a facility within reasonable proximity and not two thousand miles away.

You’ll also want to think about security. Verify the facility has all the same security checks you’d install for your own data center:

  • Are the generators accessible? How close together are they?
  • Is the data center protected against fire and flood and other natural disasters? Is it tier 1, enterprise-grade and certified?
  • Does it meet your compliance needs?
  • Is there video monitoring and 24-hour camera surveillance?
  • What kind of access controls are in place? Is there biometric and card key entry, are there cabinet and cage locks?

One final consideration: think about pairing colocation with the cloud. In addition to hosting your data backups in an offsite facility, you can still take advantage of speedy cloud failovers, spinning up a virtualized clone of your environment whenever you need it. It could be the right form of DR insurance for you, knowing you’re protected locally and in the cloud if something takes down your primary site. Keeping your servers and applications operational is the whole point of DR, after all, and colocation can be the perfect solution.

Tuesday, 19 December 2017 18:52

Should You Go Colo?

The end of the year is almost upon us, which can mean only one thing: cold and often unpredictable winter weather is about to rear its ugly head yet again.

According to a study that originally appeared in the Journal of Climate, a total of 438 blizzards took place in the United States between 1959 and 2000 – breaking down to roughly 10.7 on average per year.

But a blizzard doesn’t just bring with it tremendous amounts of snow. Each event is also incredibly dangerous due to poor visibility, terrible road conditions, chilling temperatures that leave people exposed to frostbite and hypothermia and so much more.

The Federal Emergency Management Agency (FEMA) has long held that being prepared during the storm isn’t enough to keep people safe – what you do both before and after an event also counts. Being as prepared as possible really is the key to staying safe, and for many communities, emergency notification planning and crisis communication often mean the difference between a mild inconvenience and an absolute tragedy.



Keep Your Employees Safe This Winter

While some view winter weather as a welcome excused absence from work or school, others must still find their way into the office. What they don’t want to encounter on the way are slick sidewalks, power outages, or, worst of all, inching their way through icy gridlock only to learn, after they’ve battled the weather, that the office is in fact closed. “Sorry” simply won’t suffice.

Reduce your risk for injuries and dissatisfied employees by doing your part to protect and inform them on bad weather days. You may not be able to stop the snow, rain, and wind, but you can ensure every employee has a safe way to an office that is in working condition.



Tuesday, 19 December 2017 16:29


What do hackers actually want? The answer varies widely depending on which types of cyberattacks you're talking about. Here's a look at the most common motives behind attacks by hackers.

I should preface this article by noting that technical communities lack consensus regarding the meaning of "hacker."

Historically, the term has referred to people who build new things or make them better, not break into software systems for malicious purposes.

For the purposes of this article, however, I'll be using "hacker" in the negative sense, to describe people who seek to gain unauthorized access to software or hardware.



It’s mid-December and some areas of the country have already had heavy snowfalls.  Winter storms can have a serious economic impact with disruption of business and travel, collapsed roofs and stresses on municipal governments. It’s useful to know the relative impact of winter storms, and in a recent blog post,  AIR Worldwide spotlights a rating scale called Northeast Snowfall Impact Scale (NESIS), developed by the  Weather Channel and the National Weather Service.

NESIS provides a relative measure of Northeast winter storm impact based on total snowfall amount, geographic distribution, and population density. The scale does not consider the replacement value of property, however.

NESIS has five categories: Extreme, Crippling, Major, Significant, and Notable. The Great Blizzard of 1993 holds the record for the maximum NESIS value for a single storm, at 13.20.



It’s been a busy year for crises for people and companies alike.

Some of the largest, best-known brands found themselves under fire for months, many with a series of scandals that just kept coming. Some of these crises were caused directly by the caught-on-video bad behavior of key executives, such as Uber’s founder and former CEO, Travis Kalanick. A variety of misconduct crises have dominated the headlines, especially the scores of allegations of sexual harassment and assault that continue to capture attention in the news media, in the workplace and with lawmakers.

In many of these crises, the original problems were exacerbated when leaders underestimated the impact of the event and their communications were (rightfully) met with anger and hostility. Yet despite the rising numbers of crises facing organizations of all kinds, the number of those with crisis management and communication plans in place stubbornly hovers around fifty percent. What will it take to convince executives of the need to prepare for, and more important, to prevent crises from happening in the first place? The investment is a pittance compared to the cost to repair and repent.



ISO 31000:2009 on risk management is intended for people who create and protect value in an organization by managing risks, making decisions, setting and achieving objectives and improving performance. The standard’s revision process has embraced the virtue of keeping risk management simple.

The revision of ISO 31000:2009, Risk management – Principles and guidelines, has moved one step further to Draft International Standard (DIS) stage where the draft is now available for public comment. What does it mean? And what happened in the revision process since the Committee Draft (CD) stage in March 2015?

The revision work follows a distinct objective: to make things easier and clearer. This is achieved by using a simple language to express the fundamentals of risk management in a way that is coherent and understandable to users.

The standard provides guidelines on the benefits and values of effective and efficient risk management, and should help organizations better understand and deal with the uncertainties they face in the pursuit of their objectives.



As we look forward to 2018, it is a time to reflect on the changes that have emerged in the past couple of years.

Take the 2016 study by Securitas Security Services, for example. According to this report, there were two newly emerging trends in business continuity that year — active shooter threats and mobile security in cyber communications. Those trends only escalated in 2017 and are expected to remain consistent in 2018. Along with these two trends, advancements in technology and supply chain processing also raise business continuity concerns.



That question usually comes from an executive after some other organization has a business crisis that makes global or national headlines. The question causes anxiety in many Business Continuity Planners.

I remember the first time I got that question. A local business had suffered a lightning strike, cutting power and frying much of their electrical and technology gear.  I can still recall the sudden panic when our CFO asked me that question: “What’s our Plan for that?”

We had no such Plan.  Had we had one, we would also have needed Plans for tornadoes, hail, parking lot sinkholes, contaminated drinking water and trucks crashing through our lobby doors:  things that had happened to local businesses during the previous year.

Monday, 18 December 2017 15:38

What’s Our Plan For That?

Winter Isn’t Always Pretty

We like to think of the winter scenes we may see on a holiday card – peaceful, joyful, beautiful, and full of cheer. While this may be so, it’s more likely to be chaotic with a few Grinches sprinkled in for good measure. And when it comes to work productivity during the winter months, it can be an even less promising scene.

Winter storms have a history of wreaking havoc on the economy. After a 2015 New England winter storm, economists calculated the hit to economic output at a staggering $1.25 billion. Much of the productivity loss is attributed to workers simply not being able to get to work due to poor road conditions. Of course, they’ll eventually make up the work over time, but the disruption to normal business operations can’t be overstated.

Companies can’t fix the weather, but they can put into place a winter weather communications plan to ensure employees from across their company, remote or onsite, know what to do when bad weather hits. Depending on how your organization is structured, you may have a skeleton crew who has one set of instructions to follow during the office shutdown, executives with a different checklist, and local employees with completely different expectations.

If you want to keep your office running as smoothly as possible, no matter the weather, follow these tips. Your employees will thank you and your administrators, managers, and business leaders will appreciate the forethought.



Friday, 15 December 2017 15:29


A GDPR-Readiness Program With a Unified Governance Foundation Can Increase Productivity While Reducing Costs and Risk

The May 2018 deadline for the EU’s General Data Protection Regulation (GDPR) should have organizations scrambling to roll out GDPR-readiness programs. After all, the regulation applies to most organizations doing business in the EU, non-compliance can result in severe fines, and getting ready for compliance will likely take significant time and effort.

According to a recent global CGOC survey of compliance officers, only 6 percent of respondents felt their organizations were ready to comply with the regulation. The survey also reveals that these organizations face many other data protection and management challenges. This article discusses the findings of the survey.

One possible explanation for the lack of progress – as suggested in the survey data – is that many executives are too focused on day-to-day operations to worry about preventing a potential compliance problem down the road. But whether the lack of progress is caused by a mandate to increase earnings, a focus on improving the customer experience, or some other time-sensitive initiative, executives must understand that GDPR compliance isn’t just about risk reduction and cost avoidance. The very same capabilities, strategies and technologies that enable GDPR compliance will help companies meet all their other business goals, including becoming a more efficient, more competitive organization.

And it all starts with a Unified Governance program that provides a single, centralized view of all information across the enterprise and that automates critical information management processes.



Across America, people are winding down for the Christmas season. Some of them will already be looking beyond the holiday’s excesses to 2018, and thinking about what it will bring. AFCOM turned to industry experts to find out what emerging trends they expect to impact the data center environment in the coming 12 months. Their responses covered a broad set of topics from the organizational to the technical. Here are some of the most insightful predictions from those that study data center operations, and those that work at its sharp end.

Good People Will be Harder to Find

One of the biggest challenges facing data center managers in 2018 will be finding the right people for the job. Rhonda Ascierto, research director for the data center technologies and eco-efficient IT channel at 451 Research, warns that talent management and staff shortages will both present risks in the coming year.

She believes that staff shortages will put a particular squeeze on facilities operations. “A lot of data center facilities staff are aging, frankly, and have been with the job for decades,” she warns. 



(TNS) - The 1992 benzene spill in Superior is a rare example of an incident that prompted Twin Ports authorities to order a mass evacuation.

But the efforts undertaken to alert the public about that hazard bear little resemblance to how a similar situation would be handled a quarter-century later, said Dewey Johnson, emergency management coordinator for St. Louis County.

"Things have changed since Toxic Tuesday," he said. "The expectation of the public is that it comes to the device in your hand."

For the first time, area authorities say they have the ability to reach nearly every person in a fixed area, delivering instant information in the event of an emergency situation.



Brains, Braiiiiiins, Braaaiiiiiiiins – these are three things required by both business continuity plans and zombies alike. In AMC’s The Walking Dead, zombies are plentiful.

Business continuity plans, not so much.

The story of The Walking Dead revolves around Sheriff Deputy Rick Grimes and various other characters as they struggle to survive in a world filled with – you guessed it – zombies. It’s not clear what exactly caused the viral outbreak that turned most people into mindless “walkers.”

Then again, it doesn’t really matter.



Social media of all types have joined email, telephony and instant messaging as mainstream communication tools that are used daily in many individuals’ lives. The Pew Research Center estimates that 68% (216.9 million individuals) of US citizens have a Facebook profile, and 21% (66.9 million individuals) use Twitter. These tools have become a key part of the communication landscape and need to be a consideration in any emergency communication solution. With the release of the social media enhancements to the CodeRED Launcher, these tips are especially important to keep in mind:

#1 – Social media has evolved into a viable communication tool

Throughout the September 11, 2001 terrorist attacks in New York City, the primary source of information for the public was television. A case study on the attacks showed, “more than half of Americans learned about the terrorist attacks from television, and only 1% from the Internet”2

Fast forward to Hurricane Katrina in 2005 – “mainstream media sites dominated with 73 percent”2 of online traffic directed at major news organizations for information and disaster relief donations. More recently, during the emergency response to the 2015 San Bernardino attack, online and social media platforms were successfully utilized by local police and FBI members to create a new manner of public information sharing. Safety Response Reports after the event identified Twitter as a critical component for media operations and credited the team’s use of the platform.3

People’s automatic reaction of turning to social media and the Internet to gather information continues to grow and today’s mass notification systems must provide tools for managing these critical touch points.



You Have Event Pages – Now What?

Whether you have Event Pages, or you’re interested in learning more about them, we want to help you understand how Event Pages work in certain scenarios. Once you see them in action, you can probably come up with many more ways they can benefit your organization.

Keep in mind that Event Pages ensure your employees are literally on the same page. With all of the information about an event in one place, you can ensure consistent, accurate information reaches everyone. You never have to dig through emails to see if you sent or received a message. Everything you and your employees need to know before, during, and after an event is accessible via a single click of a link. It doesn’t get any easier than that.

Event Pages can be useful for organizing around any event, but here are four to consider:



Thursday, 14 December 2017 14:58


Cold weather has set in in many parts of the U.S., but for those wanting relief, I suggest you think ahead to the month of May. By next May, many nice things will be happening. It will be spring, the flowers will be blooming, and the PGA’s Players Championship golf tournament will take place as it does every May, in Florida at Sawgrass golf course, with its devilish island green.

However, I feel I have a duty to remind you of something else that’s going to happen that month, something almost as devilish as Sawgrass’s 17th hole.

On May 25, 2018, the European Union’s new General Data Protection Regulation (GDPR) goes into effect.

From that day forward, if you are a covered organization that is not compliant with the GDPR’s new standards for safeguarding personal information, you are subject to being hit with heavy fines.



Self-driving cars for mass evacuations? Well maybe not in the very near future, but it may be coming. There are other, more currently viable technologies though that are ready to help emergency managers and public safety officials with mitigation, response and recovery.

Virtual and augmented reality are very viable tools that could be used for mitigation, education, damage assessment and the like. Predictive analysis is an area with great growth potential for the emergency manager/public safety official. These could be helpful tools, if not critical ones, for mitigating floods and fires, as well as developing response and recovery plans.

“Some of what we talk about being emerging technology is just concept and it’s not there yet, but it’s coming,” said Sarah K. Miller of S.K. Miller Consulting. “But some of it, virtual reality or augmented reality, can provide a really great tool for education of the public, emergency managers, whoever. You can do simulations that put people in a disaster scenario.”



(TNS) - The R.I. Division of Public Utilities and Carriers took comments Tuesday on the performance of National Grid after a late October storm that downed trees and power lines in many parts of the state, knocking out power to more than 150,000 Rhode Islanders.

Three days after the storm, thousands of Rhode Islanders were still without power and Gov. Gina Raimondo called for an investigation into the utility's response. That set the stage for Tuesday's hearing, which drew about 30 people.

Among them was Gina Murray, of Johnston, who asserted that a contractor hired by National Grid bungled the job of reconnecting a downed line at her house. The resulting power surges, said Murray, were like a "poltergeist" that caused $7,000 in damage to appliances, boilers and other items.



When airlines undergo mergers and acquisitions (M&A)—and they frequently do—it means merging IT systems, too, if they don’t rebuild IT infrastructure from scratch or run the systems separately. Merging is the choice companies often make, and it can also be the riskiest.

Jumbled IT systems can cause outages and critical system failures, threatening to ground thousands of flights, and could even allow too many pilots to have the holidays off.

“Quick and dirty” fixes that can get you off the ground often turn into long-term solutions—ones that can sideline your operation years from now. One dormant glitch could make your scheduling system decide to play Santa.

Take the time to remap your systems entirely, with all the dependencies, and treat them as one system. Then you can be sure your infrastructure is more reliable, and your disaster recovery plan can recover the full IT environment.



These days social media is the little red sports car of communications platforms while email is more like your father’s Oldsmobile.

However, the fact is, email is still the mainstay of internal communications for business.

For large organizations, email continues to offer powerful advantages over other communications platforms. These include its near universal acceptance and familiarity, its ability to keep a record of important communications, and the ability it provides to send attachments.



Big Data, mobility and the Internet of Things (IoT) are generating an enormous amount of data, and data center operators must find ways to support higher and higher speeds. Many data centers were designed to support 1-gigabit or 10-gigabit pathways between servers, routers and switches, but today’s Ethernet roadmap extends from 25- and 40-gigabit up through 100-gigabit, and 400-gigabit and even 1-terabit Ethernet loom within a few years.  As a result, data center operators have an immediate need to migrate their Layer 1 infrastructure to support higher speeds, and that new infrastructure must also deliver lower latency, greater agility, and higher density. In this article, we’ll look at the challenges of moving to higher-speed cabling infrastructure, and how to plan for the future.

Recent data center trends predict bandwidth requirements will continue growing 25 percent to 35 percent per year. A key impact of this sustained growth is the shift to higher switching speeds. According to a recent study by Dell’Oro, Ethernet switch revenue will continue to grow through the end of the decade, with the biggest sales forecast for 25G and 100G ports. The shift to 25G lanes is well underway as switches deploying 25G lanes become more commonplace. Lane capacities are expected to continue doubling, reaching 100G by 2020 and enabling the next generation of high-speed links for fabric switches. A number of factors are driving the surge in data center throughput speeds.
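The compounding effect of 25 to 35 percent annual growth is easy to underestimate. A short sketch makes it concrete; the 10-gigabit starting point and five-year horizon here are illustrative assumptions, not figures from the trends cited above:

```python
# Project bandwidth demand under sustained compound annual growth,
# per the 25-35 percent range cited above. The starting demand (10G)
# and the five-year horizon are illustrative assumptions.
def project_demand(start_gbps, annual_growth, years):
    """Return projected demand after compounding annual growth."""
    return start_gbps * (1 + annual_growth) ** years

for growth in (0.25, 0.35):
    demand = project_demand(10, growth, 5)
    print(f"{growth:.0%} growth: 10G of demand becomes {demand:.1f}G in 5 years")
```

At those rates, a link sized for 10G today needs roughly 30G to 45G of capacity within five years, which is consistent with the industry’s move to 25G lanes and 100G ports.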



Wednesday, 13 December 2017 17:27

Planning for High-Speed Data Center Migration

A suicide bomber attempting to blow up the NYC subway, nut jobs plowing through pedestrians, active shooters killing innocents, and deadly wildfires and hurricanes have saturated our news for months.

From workplaces like yours, lives have been taken, injuries sustained, and mental health eroded. Property ruined or damaged. Businesses in shreds. Jobs lost.

Whether you’re a business, nonprofit organization or government agency, you can’t stop crazy incidents like these.

But you can prepare for them. You can respond. And you can recover.



Wednesday, 13 December 2017 17:26

You can’t stop crazy: Part 2

No one should wait for a medical crisis to learn how healthy they are—yet not so long ago, that was the prevailing attitude among many enterprises concerning their IT networks.

Thankfully, system performance management (SPM) as a key IT discipline has evolved. Now it’s possible to do much more than simply monitor uptime; in fact, enterprises have more ways to measure their system health than ever before.

Today, enterprises are demanding a much more sophisticated and 24/7 assessment of all aspects of network performance. SPM software vendors are doing their part by providing applications that collect and analyze a powerful array of operational metrics.



Meet Sophia, who has Saudi-Arabian nationality. There’s nothing unusual about that, except that Sophia is a robot.

She was granted her nationality very recently, the first robot to ever receive such a distinction.

Whether other countries will follow suit or whether other robots will obtain Saudi-Arabian nationality remains to be seen, but the writing is on the wall.

Robots and other forms of artificial intelligence are poised for not just insertion into, but integration with society and business. Given the propensity of machines to continue to work indefinitely without tea-breaks or any other interruptions, the face of business continuity could be changed forever too.

So far, business continuity has been largely about humans monitoring and mending machines. It has also been about humans interacting with humans, as soon as the interaction became more complex than what a cash dispenser or ecommerce website could handle.



(TNS) - Five years ago, the world was stunned by a crime unprecedented in its horror — the shootings at Sandy Hook Elementary School that took the lives of 20 first-graders and six adults.

State legislators reacted to the massacre not only by enacting tougher gun laws but also by earmarking millions to make Connecticut schools safer, including addressing concerns raised after the shooting about access to school buildings, communication failures and multi-agency coordination gaps.

But now a Courant investigation has found that those efforts, started when the pain of Sandy Hook was fresh, have largely dwindled.

Nearly half the school districts in the state are violating at least some aspect of the law requiring them to submit school security information, a Courant review of state records reveals.



Iron Mountain has agreed to buy IO Data Centers, a colocation provider best known for its pre-manufactured data center modules, for $1.315 billion, the publicly traded real estate investment trust announced Monday.

The deal comes at the end of what has already been a record year for acquisitions in the data center service provider space. The year saw industry-shaping transactions like Digital Realty Trust’s $4.95 billion acquisition of DuPont Fabros Technology, the $1.67 billion acquisition of ViaWest by Peak 10, the $2.15 billion acquisition of the CenturyLink data center portfolio by a group of investors to form a new provider called Cyxtera Technologies, and the acquisition of Vantage Data Centers by Digital Bridge Holdings, reportedly for more than $1 billion.

Iron Mountain, the bulk of whose business has traditionally been document management and storage, has been aggressively expanding its data center services business. The IO deal adds four large data center sites to its portfolio and follows its acquisition of the Denver data center provider Fortrust in July and Credit Suisse data centers in London and Singapore – its first two locations outside of the US – in October.



(TNS) - As thousands wait for insurance money to make repairs in the wake of Hurricane Harvey, more than a half-dozen school districts, cities and other government agencies are still awaiting payment from the Texas Windstorm Insurance Association on nearly $60 million in claims from Hurricane Ike.

Texas City Independent School District leads the list with more than $17.2 million in outstanding Ike claims, followed by Dickinson ISD with $10.5 million and Chambers County with $9.5 million. Three other school districts, two cities and a community college are awaiting payment on additional claims of more than $22 million, according to a Houston Chronicle analysis.

Officials said they have little faith that TWIA - the insurer of last resort - will pay the claims without further legal battles.



Open-plan offices have become the norm for many companies wishing to optimize their space, encourage collaboration between staff and break down traditional hierarchies.

However, recent research challenges the idea that open-plan working is a surefire route to productivity. Far from an antidote to the inefficiency of closed-off offices, open-plan working can mean staff are beleaguered with distractions and stifled by lack of personal space.

Gensler’s 2016 Workplace Survey found that 67 per cent of the UK workforce feel drained at the end of each working day due to their office environment. In addition, badly designed offices are suppressing innovation in businesses: although over eight million UK employees work in open-plan environments, many of these do not offer variety or choice, nor are they tailored to specific tasks and practices.

“Enclosed office space is not the enemy,” says Philip Tidd at Gensler. “Moving to a simplistic open-plan may not be the most effective option in today’s hyper-connected workplace.”



(TNS) - During the Sept. 11, 2001, terrorist attacks, first responders in New York City had trouble talking to each other on radios, leading to more chaos that deadly day. Afterward, federal authorities told local agencies to digitize their radio systems to enable such communications, but it's taken the better part of two decades for Dallas to catch up to the costly recommendations.

But if officials in the city and county have their way, Dallas police and firefighters and county sheriff's deputies will soon be able to use their radios to instantly talk to other first responders nearby.

County commissioners this week approved a $68 million contract with the city and Motorola that will upgrade the outdated radios and provide maintenance for 15 years. Because the city of Dallas needs far more radios than the county does, officials said, Dallas is paying 75 percent of the costs, while the county's share is 25 percent. The City Council will vote on the deal next week.



It's no secret that unstructured data is growing at astronomical rates, contributing to the big data deluge that's sweeping across enterprise data storage environments. A new study from Western Digital and 451 Research sheds some new light on the scale of the challenge that storage administrators face each day and how it's fueling the object storage boom.

A 451 Research survey of 200 technology decision makers and influencers, sponsored by Western Digital, reveals that a majority of enterprises (63 percent) and service providers are managing storage capacities of 50 petabytes (PB) or more. More than half of that data falls under the unstructured category, existing outside of databases and within files, multimedia content and other formats.

Service providers are particularly being inundated with unstructured data. They reported annual growth rates of 60 to 80 percent, compared to 40 to 50 percent for enterprise users.
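At those growth rates, capacity requirements double alarmingly fast. A quick doubling-time calculation — standard compound-growth arithmetic applied to the rates reported in the survey, not a figure from the survey itself — shows why service providers feel inundated:

```python
import math

def doubling_time_years(annual_growth):
    """Years for capacity to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# Annual growth rates reported in the 451 Research survey cited above
for label, rate in [("service provider (60%)", 0.60),
                    ("service provider (80%)", 0.80),
                    ("enterprise (40%)", 0.40),
                    ("enterprise (50%)", 0.50)]:
    print(f"{label}: capacity doubles every {doubling_time_years(rate):.1f} years")
```

At 60 to 80 percent annual growth, a service provider’s unstructured data doubles roughly every 1.2 to 1.5 years, versus about every two years for a typical enterprise.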



Okay, everyone, raise your hand if you looked at the headline of this article and thought, “Wait a second, why is Herrera writing about my business continuity budget when everybody just finished doing them? Could his timing possibly be worse?”

Actually, my timing could hardly be better, and I’ll tell you why.

The worst way to devise your BCM program budget is to do it in a rush just before it’s due. The best way—the way that is mostly painless and delivers the most accurate, realistic, and defensible result—is to work on it bit by bit over the course of the year. I’ll explain what I mean in a minute. For now, just take it on faith that the time to start thinking about your next BCM budget is now.



The explosive growth in data, the digitization of information, and the massive acceleration in public cloud adoption are some of the key drivers behind the growing demand for public cloud-based solutions among enterprises searching for greater flexibility and cost savings, and shifting IT to the OpEx model.

Among the cloud-based solutions that enterprises are pursuing, disaster recovery is emerging as a top IT priority. A recent survey points to disaster recovery, along with workload mobility and archival automation, as a key driver of enterprise cloud adoption, with 82 percent of those surveyed citing disaster recovery as a critical reason to move to the cloud. Meanwhile, another report estimates the disaster recovery as a service (DRaaS) market will grow from $2.19 billion in 2017 to $12.54 billion in 2022, with the managed services provider segment achieving the highest growth.
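Those market figures imply a steep compound growth rate. The back-of-the-envelope calculation below simply applies the standard CAGR formula to the report’s 2017 and 2022 estimates, and works out to roughly 42 percent a year:

```python
# Compound annual growth rate implied by the DRaaS market estimates
# cited above ($2.19 billion in 2017 to $12.54 billion in 2022).
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(2.19, 12.54, 2022 - 2017)
print(f"Implied DRaaS market CAGR: {growth:.1%}")
```

That implied growth rate, just under 42 percent per year, is what sustains a near-sixfold expansion of the market in only five years.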

The majority of DRaaS solutions on the market today have very good recovery mechanisms focused on replicating on-premises systems to the public cloud. This traditional failover model has served companies well, enabling migration to the cloud during outages.



(TNS) - Facing an active shooter situation may be as likely as getting struck by lightning, but that doesn't mean lightning never strikes.

That was the warning given to University of Idaho faculty, staff and students as part of an active shooter response training session Wednesday afternoon in the Vandal Ballroom of the Bruce Pitman Center.

In a short training video from the Department of Homeland Security shown that afternoon, experts emphasized the need for everyone to develop a survival mindset before being confronted with danger. In the face of a shooter, the video outlined, respondents must decide whether to run, hide or fight to stay safe.



Friday, 08 December 2017 15:12

EM: Run, Hide and Fight

A new finish for your old car may look great, but in the end, it may still be a ’71 Pinto.  The cost of the BIA process – writing, distributing, validating, analyzing, reporting, presenting to Management, revising and repeating annually – can be a staggering amount.  Yet a BIA may be no more valuable than that new paint job.

Business Continuity programs rely on BIAs because ‘standards’ say they must.  BIA data gathering isn’t useless – just time-consuming, and of questionable value.

  • There’s little proof that BIAs improve planning, since there’s often little in a BIA to inform individual plan tasks.
  • If it doesn’t improve planning, it won’t improve organizational readiness either.
  • Most enterprise criticalities are already understood within the organization; there’s little point looking for them (again) in a BIA.
  • The man-hours spent on BIA development, completion and analysis are shockingly disproportionate to the value the results provide.



The worst wildfire season in the history of modern California is taking another bad turn, as three major fires have destroyed more than 200 homes and buildings.

Strong winds will be fanning the flames. The state’s foresters have issued a purple wind alert for Southern California, something they have never done before.

This follows a Department of Insurance report that insurers have incurred more than $9 billion in claims so far from the October fires: $8.4 billion in residential claims, $790 million in commercial property, $96 million in personal and commercial auto, and $110 million from other commercial lines.



One of the biggest trends in business today can be summed up by an acronym that is (almost) completely familiar to anyone who has ever taken their own bottle of wine to a restaurant or house party. It’s BYOD, and it involves employees bringing not their own bottle but their own mobile devices to work and beyond, and using them to perform work functions or access company data.

A 2016 study by Tech Pro Research found that 59% of the organizations surveyed let employees use their personal devices for work purposes.

A study by Syntonic in the same year found an even higher acceptance of BYOD. It determined that 87% of companies depend on letting employees use mobile business apps from their personal smartphones.

Gartner sums up the trend as follows: “Bring Your Own Device: BYOD is here and you can’t stop it.”



(TNS) - The Saline County, Kan., Commission Tuesday approved the purchase of hardware to enhance the 911 system.

Computer Technology Director Brad Bowers said the software from Tyler Technologies will cost $31,435, with the city of Salina paying half of that cost.

Commissioners then heard from Emergency Management Director Hannah Stambaugh that the 911 radio equipment might have to be upgraded.

Stambaugh said other counties that have upgraded from analog UHF to 800 MHz radio communication systems have spent up to $11 million.

"It has the potential of having a pretty hefty price," she said, but it could be good for public safety.



Wednesday, 06 December 2017 19:23

New County Radio System Could be Costly

With mobile and last-mile bandwidth coming at a premium and modern applications needing low-latency connections, compute is moving from centralized data centers to the edge of the network. But there are a lot of myths about edge data centers. Here’s what organizations typically get wrong, according to Uptime Institute’s CTO Chris Brown:

Myth 1: Edge computing is a way to make cheap servers good enough

The old branch office model of local servers won’t work for the edge; an edge data center isn’t just a local data center. “An edge data center is a collection of IT assets that has been moved closer to the end user that is ultimately served from a large data center somewhere.”



Wednesday, 06 December 2017 19:21

Five Edge Data Center Myths

Building A Strong Strategy From the Ground Up

There is no one-size-fits-all solution for the risk management function; how risk is governed varies across industries and organizations. But there are five interrelated principles that underlie effective risk management within organizations in both good times and bad – integrity to the discipline of risk management, constructive board engagement, effective risk positioning, strong risk culture and appropriate incentives.

Below, we discuss these five fundamental tenets integral to ensuring the success of the independent risk management function.



Wednesday, 06 December 2017 16:04

5 Key Principles Of Successful Risk Management

Large and small businesses differ in more than size. Large companies find it easier to adjust headcount and therefore to introduce new skillsets. For small businesses on the other hand, adding just one person can represent a significant change to the payroll.

As IT solutions have progressed, becoming smarter, more user-friendly, more automated and more granular, smaller companies have been able to more finely adjust their investments and operations, helping them keep pace with bigger corporations. So far, IT security has followed a similar evolution. But will the rising trend of threat hunting change things?

The idea behind threat hunting is that some attackers are getting too smart for current IT security technology. They can penetrate defences without being detected, install malware, and develop their attacks at their leisure. However, in doing so, they leave traces that can be picked up by astute human beings, aka threat hunters.
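To illustrate the kind of trace a threat hunter might look for, here is a minimal sketch in Python. The log format, indicator, and threshold are all hypothetical, chosen only to show the idea of a human-defined hypothesis ("repeated failed logins from one source followed by a success") applied over raw data:

```python
from collections import defaultdict

# Hypothetical indicator: several failed logins from one IP followed by a
# success can suggest a brute-force attempt that eventually worked.
FAILURE_THRESHOLD = 3

def hunt_suspicious_logins(log_lines):
    """Return IPs that failed at least FAILURE_THRESHOLD times before succeeding."""
    failures = defaultdict(int)
    suspicious = set()
    for line in log_lines:
        # Assumed (hypothetical) log format: "<result> <ip> <user>"
        result, ip, _user = line.split()
        if result == "FAIL":
            failures[ip] += 1
        elif result == "OK" and failures[ip] >= FAILURE_THRESHOLD:
            suspicious.add(ip)
    return suspicious

logs = [
    "FAIL 10.0.0.5 admin",
    "FAIL 10.0.0.5 admin",
    "FAIL 10.0.0.5 admin",
    "OK 10.0.0.5 admin",     # success after repeated failures -> suspicious
    "OK 192.168.1.9 alice",  # normal login, not flagged
]
print(hunt_suspicious_logins(logs))  # → {'10.0.0.5'}
```

A real hunt would run many such hypotheses over much richer telemetry; the point is that the hypothesis comes from a human analyst, not from an automated signature.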



If you lost your home, business or personal property due to Hurricane Irma, you or your family may be struggling to cope with the emotional impact of the disaster. For individuals and families looking to rebuild, the approaching holidays may be especially difficult.

FEMA’s online resource, Coping with Disaster, provides suggestions that may ease the stress that can follow a traumatic event such as a hurricane, which can be even more challenging around the holiday season. There are special sections on how to recognize signs of disaster-related stress, and on how to help children deal with their emotional needs.

Among the suggestions:

  • Limit your exposure to traumatic news coverage and social media about the disaster until you can handle it.
  • Stay connected with family and friends.
  • Accept the fact that your recovery may take time.

Disasters can leave children feeling frightened, confused, and insecure. Whether a child has personally experienced trauma, has seen the event on television, or has heard it discussed by adults, it is important for parents and teachers to be informed and ready to help if reactions to stress begin to occur.

The staff at the Mayo Clinic say the holiday season causes stress and depression in some people. This may be heightened by the emotional impact of other situations, such as the recent hurricane. They offer some tips on how to cope with stress, depression and the holidays.

According to the National Institute of Mental Health, symptoms of depression may include:

  • Difficulty concentrating, remembering details, and making decisions
  • Fatigue and decreased energy
  • Feelings of guilt, worthlessness, and/or helplessness
  • Feelings of hopelessness and/or pessimism
  • Insomnia, early-morning wakefulness, or excessive sleeping
  • Irritability, restlessness
  • Loss of interest in activities once enjoyed

The Substance Abuse and Mental Health Services Administration provides crisis counseling and support to people experiencing emotional distress related to natural or human-caused disasters. SAMHSA provides toll-free, multilingual and confidential support on its Disaster Distress Helpline. Stress, anxiety, and other depression-like symptoms are common reactions after a disaster. Call 800-985-5990 or text TalkWithUs to 66746 to connect with a trained crisis counselor.

Other resources for helping you and your children cope after the disaster can be found at these websites or by calling the number provided:

  • FEMA: ready.gov/kids.
  • National Center for Child Traumatic Stress: Floods. Phone 310-235-2633 or 919-682-1552.
  • Save the Children: Ten Tips to Help Kids Cope with Disasters, Hurricane Tips for Parents: How to Help Kids.
  • American Academy of Pediatrics: Helping Your Child Cope, Talking to Children about Disasters, How Children of Different Ages Respond to Disasters, How Families can Cope with Relocation Stress After a Disaster.
Wednesday, 06 December 2017 16:01

FEMA: Coping With Holiday Stress After a Disaster

Big data applications are growing as organizations mine their data for insights about clients, suppliers and operations. But as capacities grow and data becomes more sensitive, the underlying storage remains an important consideration.

Here are eight tips on how data storage professionals can stay on top of the big data deluge that threatens to overwhelm their systems.



Wednesday, 06 December 2017 15:39

8 Top Tips for Beating the Big Data Deluge

A Powerful Combination That Could Save Organizations Compliance Costs


The advances in technology in recent years have led to an exponential increase in the volume of data collected and stored. How can investigators maximize data analytics to achieve the most effective—and efficient—results for their clients? The answer lies in harnessing the power of AI to augment the capability and capacity of an investigator. 

From paper records and rudimentary analytics tools to artificial intelligence (AI) and complex data analysis, the world of investigative technology has clearly come a long way.

To identify potential risk areas for their clients, forensic investigators have traditionally relied on limited sets of information from their clients and rudimentary analytical tools. While this may have been previously adequate, the complex business environment, management structures and data deluge in today’s organizations have given rise to unconventional data sources that add important correlations to financial data. If these correlations are ignored, there is the potential that forensic investigators may miss opportunities to mitigate instances of fraud for their clients. To further complicate the challenges faced today, this in-depth analysis, across a vast amount of data, needs to be done in a cost-effective manner.
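One simple way such analysis can surface risk areas is by flagging statistical outliers in financial data before an investigator looks closer. The sketch below is illustrative only: the vendor names, amounts, and z-score threshold are hypothetical, and a real engagement would tune the method to the client's data:

```python
from statistics import mean, stdev

def flag_outlier_vendors(invoices, z_threshold=2.0):
    """Flag vendors whose total invoiced amount is a statistical outlier.

    invoices: list of (vendor, amount) pairs. Threshold and data are
    illustrative, not a prescription for a real investigation.
    """
    totals = {}
    for vendor, amount in invoices:
        totals[vendor] = totals.get(vendor, 0.0) + amount
    values = list(totals.values())
    mu, sigma = mean(values), stdev(values)
    # A vendor more than z_threshold standard deviations above the mean
    # is worth a human investigator's attention.
    return {v for v, t in totals.items() if sigma and (t - mu) / sigma > z_threshold}

invoices = [
    ("vendor_a", 100), ("vendor_b", 98), ("vendor_c", 102),
    ("vendor_d", 101), ("vendor_e", 99), ("vendor_f", 100),
    ("vendor_g", 97), ("vendor_h", 103), ("vendor_i", 100),
    ("shell_co", 2500), ("shell_co", 2500),  # unusually large totals
]
print(flag_outlier_vendors(invoices))  # → {'shell_co'}
```

AI-assisted tools extend this idea across far larger and messier data sets, correlating financial records with the unconventional sources mentioned above, but the output still goes to a human investigator for judgment.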



Today there are more households with mobile devices than with desktop computers.

According to the Pew Research Center, 84 percent of US households own smartphones, with a median of two per household, while 80 percent own a desktop or laptop computer, with a median of one. In fact, 95 percent of American adults now use some sort of cell phone. For all the personal data that is being shared across mobile lines, there needs to be greater attention given to the threats of mobile security.

Scope of Security Threats to Mobile Users

Mobile use is only expected to increase due to the dependency on this type of technology. Already, mobile devices are used to access the internet for everything. The Pew Research Center states that 62 percent of users accessed information about their health conditions on a mobile device. In addition, 57 percent use mobile devices for online banking, while 18 percent have submitted a job application on their smartphone.



Monday, 04 December 2017 17:16

Trends and Threats in Mobile Security

The database market has evolved over the decades on the incredible efforts of several single server databases including Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB. There are many more; however, these few have furthered the industry with a recipe for building robust transactional systems.

In fact, Oracle and Microsoft SQL Server, by far the two most popular commercial single server databases, are the driving forces behind the combined 65 percent market share for the two companies.

Single server databases provide an architectural simplicity that is hard to beat: a single database process runs on Server 1, while Server 2 stands by as a replica to provide high availability.
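From a client's perspective, the primary-plus-standby pattern described above boils down to "try Server 1, fall back to Server 2." Here is a minimal sketch of that failover logic; the host names and the `connect` callable are hypothetical placeholders that a real database driver would supply:

```python
# Sketch of client-side failover for a primary/standby database pair.
# Hosts are hypothetical; connect() stands in for a real driver's connect call.

PRIMARY = "db1.example.internal"
STANDBY = "db2.example.internal"

def connect_with_failover(connect, hosts=(PRIMARY, STANDBY)):
    """Return a connection to the first reachable host, or raise."""
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as err:
            last_error = err  # remember why this host failed, then try the next
    raise RuntimeError(f"all hosts unreachable: {last_error}")

# Simulated connect() in which the primary is down:
def fake_connect(host):
    if host == PRIMARY:
        raise ConnectionError("primary is down")
    return f"connected to {host}"

print(connect_with_failover(fake_connect))  # → connected to db2.example.internal
```

The simplicity is real, but so is the limit: one writable node caps throughput at whatever a single server can handle, which is the motivation for the distributed alternatives the article goes on to discuss.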

So why change?



Monday, 04 December 2017 17:14

Graduating from Single Server Databases

Prepare to Plan or Plan to Fail

As Audit teams start thinking about their 2018 plans, being able to identify new trends in emerging risk areas that threaten to disrupt enterprise performance over the next year is critical. This article explains 12 risks, connected by four major risk themes, that organizations need to have on their radar, and what Audit teams need to do to more effectively identify and communicate these risks to their organizations and stakeholders.

Global unpredicted events this year – election results, natural disasters, corporate scandals – have heightened executive and board sensitivity towards risk. Consequently, Audit committees are increasingly tasking Internal Audit to provide assurance over a wider set of risks, beyond traditional financial and operational focus areas.

Annually, CEB, now Gartner, surveys more than 200 Audit heads globally on risks that should be top of mind for organizations in the next year. This year, our Audit Plan Hot Spots report identified four overarching themes that underlie the risks that Chief Audit Executives (CAEs) express as critical to including on their audit plans in 2018:



Monday, 04 December 2017 17:12

4 Major Audit Risk Themes For 2018
