Lax practices belie backup confidence

A major survey of UK companies’ backup and recovery provisions has found security and procedural best practice are sorely lacking.

The latest research into the backup and recovery habits and attitudes of UK firms has revealed weak handling of governance and security, despite overall confidence in disaster recovery (DR) IT systems and practices.

The annual Databarracks backup and recovery survey questioned 500 companies and found that, overall, 91 per cent claim to be confident in their provision of these systems.

However, nearly three quarters (74 per cent) do not use encryption or replication and do not take backups offsite, yet remain confident of their DR capabilities despite skipping these steps.

Perhaps more alarmingly, the high overall confidence rating belied the fact that two thirds (67 per cent) still consider their backup solution secure even though they neither keep nor check backup logs and do not test their restores to ensure they are working properly.

This false confidence betrayed a gap in understanding of the DR technologies available and the importance of having business continuity and DR plans in place, according to Databarracks managing director, Peter Groucutt.

"These days it is not enough to blindly trust that backups are being completed properly," he said. "Businesses and the regulatory environment in which we all exist demand fast and reliable recovery time objectives for IT systems."

The confidence UK firms have in their backup and recovery systems could potentially be attributed to the fact that hardware failures, as a source of data loss, have fallen dramatically compared to previous survey findings.

The survey revealed that 27 per cent of companies' data loss was caused by human error, 26 per cent by hardware failure and 19 per cent by software failure. Databarracks' previous report, in 2006, indicated that hardware failure caused the majority of data loss at 61 per cent, while human error accounted for just 2 per cent.

Although this change could possibly be attributed to improvements in software and hardware resiliency over the last two years, Groucutt said it should leave no room for complacency.

"Such is the pace at which a modern company transacts business these days that those who are without their IT for any great length of time are losing serious money," he added. "Customers are also becoming a lot more aware of the information that companies hold and are getting less and less forgiving about delivery disruption, let alone the thought of their sensitive data being transported in an unencrypted and readable format."
