A Radically New Approach to Backup and Recovery in the Data Age
Nobody likes to lose things. As long as humans have owned physical and intellectual property, they’ve had concerns about storage. Good storage seeks to answer two questions: how can you protect an asset from theft or damage while keeping it accessible to yourself and others who might need it? And how can you minimize the chance of losing it altogether?
Despite the many technological advances in data storage and recovery, traditional backup remains flawed – especially at scale. For organizations backing up petabytes of data, achieving data resilience is a significant and growing problem; at tens or hundreds of petabytes, it becomes all but insurmountable.
Backup is a critical means of recovery, yet decades-old legacy enterprise backup solutions were not designed to handle the size and complexity of today’s data. And this problem continues to grow: terabytes of data are rapidly becoming petabytes, exabytes, and beyond. According to IDC, the amount of new data created, captured, replicated and consumed is expected to more than double from 2022 to 2026, and Statista Research predicts the world’s collective data creation will reach more than 180 zettabytes by 2025.
Organizations need their data to be resilient and continuously available, with the ability to spring back seamlessly to reduce the risk of critical data loss and the impact of downtime, outages, data breaches, and natural disasters. Resiliency also prevents downtime when performing upgrades, data migration, and planned maintenance.
With data growing at an exponential pace, organizations need data resilience at scale. Keeping large-scale data sets secure and resilient is a significant challenge, and traditional backup often proves unviable.
As data grows, so do vulnerabilities in the form of corruption, malware, accidental deletion, and more. The time it takes to find lost data with traditional backup systems increases with the amount of backup data stored. IT departments are constantly pulled into the task of data recovery.
Continuous data availability has not been possible due to the weaknesses of traditional backup. Business leaders have been forced to accept a degree of data loss, measured by recovery point objectives (RPO), and a degree of downtime, measured by recovery time objectives (RTO). With nightly backups, for example, a failure late in the day can wipe out nearly a full day’s worth of changes.
Traditional backup was a solution for millions of files – but we’re in an age of billions of files.
Traditional backup works by scanning a file system to find and copy new and changed files. However, scanning takes longer as the number of files grows – so much so that it is becoming impossible to complete scans within a reasonable time frame. Scans usually run at night, when systems are likely to be less volatile.
In addition, backups occur at set intervals, which means any change made since the most recent backup will be lost if the system fails before the next scan runs. Traditional backup does not meet the objective of zero data loss. Recovering data from petabyte-sized repositories is also time-intensive – the recovery process is not what it should be; it is tedious and slow.
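As a rough illustration of both problems (a minimal sketch with assumed paths and a hypothetical manifest format, not any vendor’s actual implementation), the code below mimics a scan-based incremental backup. Every run must walk the entire tree and compare each file against the previous run’s manifest, so the work grows with the total number of files rather than the number of changes, and anything written between runs exists only on primary storage until the next scan.

```python
# Minimal sketch of scan-based incremental backup (illustrative only).
# Every run walks the whole tree, so cost grows with total file count,
# even when only a handful of files actually changed.
import json
import os
import shutil

def incremental_backup(source_root, backup_root, manifest_path):
    # Load the previous run's manifest: path -> [mtime, size].
    try:
        with open(manifest_path) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}

    current = {}
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            current[path] = [stat.st_mtime, stat.st_size]

            # Copy only files that are new or changed since the last scan.
            if previous.get(path) != current[path]:
                dest = os.path.join(backup_root, os.path.relpath(path, source_root))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(path, dest)

    # Persist the new manifest for the next nightly run.
    with open(manifest_path, "w") as f:
        json.dump(current, f)
```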
When users want to recover data, they typically ask an IT administrator for help. The administrator then asks for the paths and names of the missing files, along with the date and time they existed. Many people will not remember those exact details, and so begins a process in which backup sets are restored one after another and inspected until the missing or damaged files are found. That search can take hours, days, or longer – an inefficient and costly process, and the inefficiency compounds when there are many files to find and restore.
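To make the cost of that workflow concrete, here is a hypothetical sketch of what the search amounts to once each candidate backup set has been restored to disk: every set is walked and inspected in turn until something matching the user’s vague recollection turns up. The directory layout, set names, and search pattern below are illustrative assumptions.

```python
# Hypothetical sketch of restore-and-inspect recovery from dated backup sets.
# Each set must be restored and searched in turn, so recovery time grows with
# the number of sets and with how vaguely the file is remembered.
import fnmatch
import os

def find_in_backup_sets(backup_sets, name_pattern):
    """backup_sets: restored set directories, newest first."""
    for backup_set in backup_sets:
        # In practice, restoring a single petabyte-scale set can itself take hours.
        for dirpath, _dirnames, filenames in os.walk(backup_set):
            for name in filenames:
                if fnmatch.fnmatch(name, name_pattern):
                    yield os.path.join(dirpath, name)

# Example: search newest-to-oldest sets for a half-remembered report name.
# hits = list(find_in_backup_sets(["/restore/2024-05-02", "/restore/2024-05-01"], "*budget*.xlsx"))
```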
Achieving data resilience at scale in today’s data-driven world demands a radical new approach.
With data volumes growing, keeping data secure and resilient is becoming more challenging. A new paradigm is needed to address the magnitude of modern data demands, one which maximizes data resilience at scale and replaces today’s broken backup model. IT leaders must shift their focus from successful backups to successful recoveries.
Traditional backup is independent of the file system, but a radically new approach would merge the file system and backup into a single entity rather than two separate ones. As a result, every change in the file system would be recorded as it happens, and end users could recover lost data without IT assistance. Finding files would also be easy, regardless of when they existed, across the entire time continuum.
Such an approach would redefine enterprise storage by converging storage and data protection in one system. This model would increase data resilience and provide a strong first line of defense against ransomware cyber locking, enabling organizations to recover compromised data easily and swiftly. It would allow users or IT administrators to go back to any point in time to recover needed files – even in the event of a cyberattack where files have been encrypted.
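One way to picture this converged model (a conceptual sketch of the idea described above, not any specific product’s design; the event schema and function names are hypothetical) is a file system that appends every change to a journal at the moment it happens. Reconstructing how things looked at any earlier moment is then just a replay of the journal up to that point.

```python
# Conceptual sketch: a file system that records every change as it happens,
# so any point in time can be reconstructed without separate backup sets.
import time
from dataclasses import dataclass, field

@dataclass
class ChangeEvent:
    timestamp: float      # when the change occurred
    path: str             # file affected
    action: str           # "write" or "delete"
    content: bytes = b""  # new contents for writes

@dataclass
class JournaledFileSystem:
    journal: list = field(default_factory=list)

    def write(self, path, content):
        # Every change is captured inline, the instant it happens.
        self.journal.append(ChangeEvent(time.time(), path, "write", content))

    def delete(self, path):
        self.journal.append(ChangeEvent(time.time(), path, "delete"))

    def view_as_of(self, point_in_time):
        """Replay the journal up to point_in_time to rebuild that moment's files."""
        snapshot = {}
        for event in self.journal:         # journal is ordered by timestamp
            if event.timestamp > point_in_time:
                break
            if event.action == "write":
                snapshot[event.path] = event.content
            else:
                snapshot.pop(event.path, None)
        return snapshot
```

In such a model, recovery is a read-only query against the journal rather than a restore job: a user who selects a point just before files were encrypted simply reads back the pre-attack versions.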
An analogy is insuring a mountain house against fire. One strategy involves risk mitigation – removing trees around the house, maintaining fire breaks, clearing roofs and gutters of dead leaves, debris, and pine needles, and removing any flammable material from the exterior walls. Alternatively, a person could take no measures at all and simply wait for the house to burn down, hoping the insurance is adequate to cover the loss. The first approach is proactive – avoid the disaster in the first place. The second is reactive – a bad thing happened, and now we will spend a lot of time, effort, and money and hope we can recover to where we were before the event.
This scenario exemplifies the difference between recovery from a state of continuity (continuous data access) and discontinuity (a disaster strikes). A proactive approach – one which involves continuous inline data protection – would eliminate the cost and business impact of lost data and allow for the following benefits:
- Data security and resilience at scale with extremely fast data recovery and zero data loss, accomplished by uniting the file system and data fabric.
- Continuous data protection makes it possible to achieve continuity of service at scale with the ability to instantly unwind the file system to appear as it was at the selected point in time before the data corruption, hardware failure, or malicious event.
- The ability to roll back ransomware attacks in minutes at most, rather than days, weeks, or more – providing a first line of defense against corporate loss and strong protection against criminals holding a business and its data hostage.
- Expedited data recovery enables users to find and recover what they need interactively – a “do it yourself” data search and recovery process which eliminates the need for IT intervention.
Backup which sits within the operating system data path is a prerequisite for data resiliency at scale. This approach could provide unprecedented data protection, making it possible to approach an RTO of zero – users can search for and recover data immediately, without IT assistance – and an RPO of zero, eliminating the high cost and impact of data loss and interrupted data access.
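A hedged sketch of what sitting within the data path might look like in practice follows: every write passes through a layer that journals it before acknowledging it, so protection is synchronous with the change rather than deferred to a later scan. The wrapper below is purely illustrative; a real implementation would live in the kernel or the file-system layer itself, and the class and journal structure are assumptions.

```python
# Illustrative sketch of in-data-path protection: a wrapper that journals each
# write before passing it through, so no change exists only on primary storage
# while waiting for the next backup scan.
import time

class ProtectedFile:
    def __init__(self, path, journal):
        self._path = path
        self._journal = journal        # append-only change log (hypothetical)
        self._handle = open(path, "ab")

    def write(self, data: bytes):
        # Capture the change inline, before acknowledging it to the application.
        self._journal.append((time.time(), self._path, data))
        self._handle.write(data)
        self._handle.flush()

    def close(self):
        self._handle.close()

# Usage: every application write is protected the instant it happens.
# journal = []
# f = ProtectedFile("/data/report.txt", journal)
# f.write(b"quarterly numbers\n")
# f.close()
```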