Roughly 80% of all data is unstructured, according to a study from Deloitte. And according to a 2018 study from 451 Research, “Unstructured data continues to grow faster than traditional database data for customers in most vertical markets.” Protecting this rapidly growing data set poses a serious, ongoing challenge to the enterprise.

Certainly, backup and disaster recovery technologies have made a great deal of progress over the last few years in reducing RPOs and RTOs. Increasingly, organizations no longer back up data only at night; they can instead do so many times per day without affecting the performance of the production environment. Recovery, too, can now often be measured in mere minutes rather than hours.

But the backup process is still a significant administrative and management burden, and backup systems remain an expensive bolt-on to storage systems. In large organizations especially, a team of personnel is dedicated to conducting backups, managing the process, and ensuring backup integrity. Faulty or corrupted backups that cannot be restored properly are still a significant problem with many systems, as are backups that contain malware. Many new ransomware attacks deliver code that worms its way through systems for days, even weeks, before detonating to cryptographically shred or hold hostage an organization’s data. As a result, recovering data to the point immediately before the attack was launched may simply set the attack in motion all over again.

A world without bolted-on backup

Ideally, there would be no need to buy a data protection solution at all; the storage system would protect itself. After all, no responsible enterprise deploys a storage system without planning to protect the data it stores and to recover from disasters if necessary. So why should there be separate systems for storage and backup?

Thankfully, modern global file systems can take advantage of the unique properties of the cloud to provide an elegant storage solution that enables organizations to store, access and share file data without deploying any additional backup and DR systems at all. In 2020, the best backup is no backup.

Cloud computing has completely changed nearly every aspect of enterprise IT, including storage. Initially, enterprises turned to the cloud primarily for its essentially unlimited capacity and lower storage costs. But the cloud’s object store architecture was not well suited to storing production file data on its own: object storage is not designed to handle data that changes as frequently as file data does, and the distance between the cloud data center and the end-user introduced unacceptable latency.

The advent of cloud-based file systems built on top of this object store enables IT to leverage the cloud for primary storage. And with the assistance of virtual or physical edge appliances that cache the most frequently used data, cloud-based file systems can provide the same performance end-users have come to expect from traditional network attached storage (NAS) arrays.
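
To make the caching idea concrete, here is a minimal sketch of the kind of least-recently-used cache an edge appliance might run. It is an illustration only, not any vendor's actual appliance code; the class and callback names are hypothetical.

    from collections import OrderedDict

    class EdgeCache:
        """Minimal LRU cache: keeps the most recently used files on the
        local appliance and pulls misses from cloud object storage."""

        def __init__(self, capacity_bytes, fetch_from_cloud):
            self.capacity = capacity_bytes
            self.used = 0
            self.fetch = fetch_from_cloud        # callable: path -> bytes
            self.entries = OrderedDict()         # path -> file contents

        def read(self, path):
            if path in self.entries:
                self.entries.move_to_end(path)   # mark as recently used
                return self.entries[path]
            data = self.fetch(path)              # cache miss: fetch from cloud
            self.entries[path] = data
            self.used += len(data)
            while self.used > self.capacity:     # evict least recently used
                _, evicted = self.entries.popitem(last=False)
                self.used -= len(evicted)
            return data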

Cloud-based global file systems

These global cloud-based file systems enable functionality that traditional, on-premises NAS arrays could not offer. Teams can collaborate even on very large files across continents without having to wait for the data to download — to the end-user, it appears as if the file is stored locally, even though its true home is in the cloud. Corporate IT can manage file data globally, eliminating local silos. And thanks to the highly redundant nature of the cloud, the system is self-protecting, eliminating the need for a separate backup system.

These cloud-based file systems take immutable snapshots to capture changes, which are sent to the cloud, where the gold copies of all versions are kept. In this way, the system creates a permanent record of every file contribution and version. For very active data, these snapshots can occur every five minutes. And because they are extremely space efficient, the global file system can store them all in the cloud without incurring any significant cost.
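
A content-addressed design helps explain why such snapshots are space efficient: each file is split into chunks, and only chunks the cloud has never seen before are uploaded, so a new version costs little more than what actually changed. The sketch below is a simplified illustration under that assumption, not any particular vendor's implementation.

    import hashlib

    def take_snapshot(files, cloud_chunks, chunk_size=4 * 1024 * 1024):
        """Record a point-in-time version: upload only unseen chunks and
        return a manifest mapping each file to its ordered chunk digests."""
        manifest = {}
        for path, data in files.items():
            digests = []
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in cloud_chunks:   # dedupe: new chunks only
                    cloud_chunks[digest] = chunk
                digests.append(digest)
            manifest[path] = digests
        return manifest                          # store as the version record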

If these snapshots are written to the cloud as Write Once Read Many (WORM) objects, no data can be corrupted or overwritten. And because separate metadata versions are maintained for each snapshot, restoring a file, or even a volume that clocks in at multiple terabytes, takes just seconds to complete. There’s no need to undergo a full restore or migration process: the system simply points at the relevant version, and end-users can access their data.
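
One way to get WORM behavior from a public cloud is an object store's retention feature. The hedged sketch below uses Amazon S3 Object Lock via boto3 and assumes a bucket created with Object Lock (and therefore versioning) enabled; the function name and retention period are illustrative choices, not part of any specific product.

    import boto3
    from datetime import datetime, timedelta, timezone

    s3 = boto3.client("s3")

    def write_worm_snapshot(bucket, key, body, retain_days=365):
        """Write a snapshot object that S3 will refuse to overwrite or
        delete until the retention date passes."""
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=body,
            ObjectLockMode="COMPLIANCE",         # retention cannot be shortened
            ObjectLockRetainUntilDate=(
                datetime.now(timezone.utc) + timedelta(days=retain_days)
            ),
        )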

Public cloud providers like Google Cloud Platform, Microsoft Azure, and Amazon Web Services (AWS) operate on a highly redundant architecture, which enables them to maintain high levels of availability and data durability that even the largest, most advanced enterprise data centers cannot match. Data is copied multiple times in multiple locations throughout the cloud, and with the file system taking frequent snapshots of the file data, backing up files to tape or disk becomes unnecessary.

As a result, IT organizations can achieve RPOs of 15 minutes or less and RTOs measured in minutes, even for extremely large data sets. Even after a massive natural disaster in which a corporate office is completely destroyed, end-users can still access all of their data from anywhere in the world. All IT needs to do is set up a virtual appliance in the cloud relatively close to employees and then direct their endpoints to the new appliance. Even if local appliances melt down completely, it takes only about 15 minutes to fully recover and provide access to all the data in the cloud.
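
As a sketch of that cutover, the example below repoints a file-access hostname at a freshly launched cloud appliance using Amazon Route 53 via boto3. DNS is only one possible redirection mechanism, and the zone ID, record name, and address here are placeholders.

    import boto3

    route53 = boto3.client("route53")

    def fail_over(zone_id, record_name, new_appliance_ip):
        """Repoint the file-access hostname at a newly launched cloud
        appliance so endpoints reconnect without manual reconfiguration."""
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Comment": "DR failover to cloud edge appliance",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": record_name,     # e.g. files.example.com
                        "Type": "A",
                        "TTL": 60,               # short TTL speeds cutover
                        "ResourceRecords": [{"Value": new_appliance_ip}],
                    },
                }],
            },
        )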

Just as important, a cloud-based global file system is more effective at recovering from ransomware than a traditional backup. Because the entire version history of every file is accessible, and because snapshots are stored in the cloud in an immutable, read-only (WORM) format, previous versions cannot be encrypted by ransomware, so there’s no danger of an attack permanently hijacking file data. In a traditional setup, IT must take special precautions to maintain an air gap between the production network and backups so they stay safe from malware, a necessary practice that also increases RTOs.
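
A hedged sketch of that recovery path, again assuming versioned Amazon S3 storage accessed through boto3: find the newest object version written before the attack began and promote it back to being the current version. The helper name and error handling are illustrative.

    import boto3

    s3 = boto3.client("s3")

    def restore_before(bucket, key, attack_time):
        """Copy the newest pre-attack version of an object back into place.
        attack_time must be a timezone-aware datetime."""
        listing = s3.list_object_versions(Bucket=bucket, Prefix=key)
        clean = [v for v in listing.get("Versions", [])
                 if v["Key"] == key and v["LastModified"] < attack_time]
        if not clean:
            raise RuntimeError("no pre-attack version of " + key)
        best = max(clean, key=lambda v: v["LastModified"])
        s3.copy_object(                          # promote the clean version
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key,
                        "VersionId": best["VersionId"]},
        )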

Traditional file systems can attempt the same feat with snapshots, but in an on-premises environment capacity is limited, and snapshots take up valuable primary storage space. As a result, IT has to decide how many days of snapshots the organization can afford to retain, which typically means no more than 48 hours. If ransomware infected systems a week before activating, the available snapshots are already compromised and would simply set the attack in motion once again.

With the advent of self-protecting, cloud-based global file systems, there is no longer any need to deploy a separate backup system for on-premises NAS. Not only does IT no longer need to dedicate valuable time and resources to backup management, but the organization also gains shorter RPOs and RTOs and the ability to recover from ransomware attacks in minutes. The best backup truly is no backup.

ABOUT THE AUTHOR

Russ Kennedy

Russ Kennedy is the chief product officer at Nasuni, where he leads the company’s product management, planning, and roadmap efforts. He has a maniacal focus on ensuring Nasuni customers derive maximum benefit from the company’s technology. Kennedy is a well-known and highly regarded storage industry executive with more than 25 years of experience developing software and hardware solutions to address exponential data growth. Prior to Nasuni, Kennedy directed product strategy at private cloud object storage pioneer Cleversafe through its $1.3 billion acquisition by IBM.
