Backup and data protection

Where did it start and where is it going?

   

Friday 6th August 2021 | 5 minute read

To understand the future of technology, it sometimes helps to understand where it evolved from. Backup has always been a mission-critical function – even if senior decision makers have sometimes overlooked its importance.

Here’s where it all began…

1960s - We’ve got it on tape

Although developed a decade earlier, magnetic tape backup systems reached the mainstream in the 1960s. Information was copied from live systems to a backup tape each night, which would then be stored offsite. As the data estate expanded, multiple tapes would be added to the backup set – leading to the development of autoloaders that could change tapes automatically.

Incredibly, tape remains an important backup medium thanks to its high capacity, portability and reliability. However, there are several important trade-offs that explain why businesses are keen to move away from it. The technology is slow for both backup and recovery, particularly when backup sets span multiple tapes. And the loss or corruption of a single tape can compromise the entire backup set.

1980s - Backup inside the server

At the same time, work was underway to further increase server resilience and reduce the risk of data loss. RAID (redundant array of independent disks) entered the mainstream as an essential aspect of system design, using a collection of inexpensive disks configured to replicate information within the same machine. If a physical disk fails, its contents can be rebuilt from the rest of the array without losing any data.

RAID recovery is typically quicker and easier than copying data back from an external system. It also allows continuity of service – applications will continue to operate even if a disk fails.
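To make the rebuild idea concrete, here is a minimal sketch of how parity-based RAID schemes reconstruct a lost disk from the survivors. It is an illustration only – the in-memory ‘disks’ and XOR parity below stand in for what a real controller does across physical drives, and mirrored (RAID 1) arrays achieve the same resilience by keeping full copies instead.

```python
# Illustration of RAID-style XOR parity: three data "disks" plus one parity
# "disk". If any single disk is lost, its contents can be rebuilt by XOR-ing
# the surviving disks together.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Data striped across three disks, plus the parity calculated from them.
disk1, disk2, disk3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([disk1, disk2, disk3])

# Simulate losing disk2, then rebuild it from the survivors and the parity.
rebuilt_disk2 = xor_blocks([disk1, disk3, parity])
assert rebuilt_disk2 == disk2
print("disk2 rebuilt:", rebuilt_disk2)
```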

Continuous backup

Information is constantly being updated, and organisations can no longer rely on periodic snapshots for restoring data; even a lag of a few minutes could have serious consequences if data is lost between backups. Continuous backup is a system where data is backed up every time a change is made, so there is no traditional backup window or schedule. The original technique was patented in 1989 and helped solve the problem of protecting a constantly growing data set within a strict backup window. It developed in parallel with the transition from tape to disk-based backup, bringing additional benefits such as overcoming the capacity limitations of tape.
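As a rough sketch of the idea (not any particular vendor’s implementation, and with hypothetical file paths), a continuous backup agent watches for changes and captures a new copy the moment one occurs, rather than waiting for a scheduled window:

```python
# Minimal sketch of continuous backup: every detected change to a live file is
# immediately copied to a versioned backup location, so there is no backup
# window. Paths are hypothetical; real products hook into change journals or
# block-level drivers rather than polling file timestamps.
import shutil
import time
from pathlib import Path

SOURCE = Path("data/orders.db")         # hypothetical live file
BACKUP_DIR = Path("backups/orders")     # hypothetical backup location
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

last_mtime = None
while True:
    mtime = SOURCE.stat().st_mtime
    if mtime != last_mtime:
        # A change was made: capture a point-in-time copy straight away.
        version = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(SOURCE, BACKUP_DIR / f"orders-{version}.db")
        last_mtime = mtime
    time.sleep(1)
```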

1990s - Disk to disk

As the size of hard drives increased in the 1990s and the cost per GB fell, disk became an increasingly common backup medium of choice.

With faster access speeds, hard drives helped to narrow the backup window – and shorten the time to recovery when something went wrong.

Disk was often used to complement tape, allowing businesses to tier their backup regime. Mission-critical data could be saved to hard drive, while less important data sets, or data that needed to be retained for longer periods of time, could be stored on lower cost tape.

2000s - Backup to cloud

The cloud has changed the backup game once again. High-speed internet connectivity coupled with virtually unlimited storage capacity has made the cloud an obvious backup target. Files can be automatically stored offsite, adding an important layer of protection for data. And recovery is relatively quick too, particularly when dealing with cloud-based applications.

Choosing cloud backup has helped businesses avoid the physical limitations of tapes and disks – and the capital costs of upgrading capacity as the data estate continues to grow.

The 3-2-1 rule becomes a reality

Peter Krogh first formulated the 3-2-1 rule for photography in the early 2000s, but its principles apply universally to digital media. The rule advises that, for maximum protection, businesses maintain at least three copies of their data to assist with recovery (the ‘3’). The original or production data is the first copy, with two backup copies making three. Those two backup copies should be stored on different media so that, if one is corrupted or destroyed, the other is still available (the ‘2’). The final part of the rule is that one of the two backup copies should be stored offsite to further protect it from anything that might happen to the other backup copy (the ‘1’).
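As a simple illustration of the rule in practice (all paths and the bucket name below are hypothetical), a backup job might treat the production file as copy one, write copy two to a second local medium such as a NAS, and push copy three offsite to cloud object storage:

```python
# Minimal 3-2-1 sketch: three copies of the data, on two different media,
# with one copy held offsite. Paths, bucket name and credentials are
# hypothetical assumptions for illustration.
import shutil
from pathlib import Path

import boto3  # AWS SDK for Python, used here for the offsite copy

production_file = Path("data/customer-records.db")            # copy 1: live data
local_backup = Path("/mnt/nas/backups/customer-records.db")   # copy 2: second medium

# Second copy on different local media (the '2').
local_backup.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(production_file, local_backup)

# Third copy stored offsite in cloud object storage (the '1').
s3 = boto3.client("s3")
s3.upload_file(str(production_file), "example-offsite-backups", "customer-records.db")
```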

2010s onwards - Data protection: moving beyond basic backup

The always-on nature of modern IT means that basic backup and restore operations are no longer sufficient. Instead, organisations need a long-term strategy that includes automatic failover, disaster recovery and regular testing cycles to ensure operational continuity, even in the event of a local disaster.

Previously this would have involved a costly co-located data centre, automatically replicating data and triggering a failover in the event of a service interruption. Modern cloud-based services offer similar functionality and resilience at a fraction of the cost – as well as protecting data and operations against loss and outages.

Always on and always available, cloud-based disaster recovery (DR) platforms represent the future of backup because recovery delays are minimal: “lost” information can be restored online in a matter of seconds.

Simplifying immutability

Although the concept of immutability has been around for a while, it has only gained prominence relatively recently, driven by the growth of ransomware and the increasing importance of cyber security. Having an ‘immutable’ backup – a fixed data set that cannot be changed or overwritten – is a useful tool for system recovery because you know the data can always be trusted. But maintaining and protecting an immutable backup locally can be resource-intensive and costly. Cloud backup is ideal for this task – being offsite reduces the risk of unauthorised access or damage to the data set and ensures it is fully protected against local disasters.
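As one example of how this can look in practice (a sketch only – the bucket name is hypothetical and the bucket is assumed to have been created with S3 Object Lock enabled), a backup can be written with a retention date so that it cannot be changed or deleted until that date has passed:

```python
# Minimal sketch of an immutable cloud backup using S3 Object Lock. In
# compliance mode the object cannot be overwritten or deleted, even by an
# administrator, until the retention date passes.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("backups/customer-records.db", "rb") as backup:  # hypothetical path
    s3.put_object(
        Bucket="example-immutable-backups",   # bucket created with Object Lock enabled
        Key="customer-records-2021-08-06.db",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```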

Into the future

Because the world now runs on data, industry is constantly developing new ways to ensure no detail is lost. Emerging threats like ransomware have only served to emphasise the importance of backup and data protection strategies – and the role that developing technologies can play in protecting your valuable data from loss, theft or exposure.

Everything and nothing has changed

Ever since computers became an essential business tool, the fundamental need to protect, recover and archive data has not changed. However, downtime today costs far more than it did in the past, affecting not only business processes but also customer satisfaction and business reputation.

At the same time, there is more data than ever, in more places than ever, and all of it needs to be managed and protected. Today, providers are re-architecting their offerings to deliver true data immutability and to harness the power of the cloud to eliminate the potential for data loss.

Now data protection strategies must evolve as quickly as workloads.

Book a backup health check today to ensure you have the coverage you need

Book now!