A Hitch-Hacker’s Guide to the Galaxy – Developing a Cyber Security Roadmap for Executive Leaders
In this blog series, I am looking at steps that your organisation can take to build a roadmap for navigating the complex world of cyber security and improving your cyber security posture.
There’s plenty of technical advice out there for helping security and IT teams who are responsible for delivering this for their organisations. Where this advice is lacking is for executive leaders who may or may not have technical backgrounds but are responsible for managing the risk to their organisations and have to make key decisions to ensure they are protected.
This blog series aims to meet that need, and provide you with some tools to create a roadmap for your organisation to follow to deliver cyber security assurance.
Each post focuses on one aspect to consider in your planning, and each forms a part of the Cyber Security Assessment service which we offer to our member organisations in the UK Higher and Further Education sector, as well as customers within Local Government, Multi-Academy Trusts, Independent Schools and public and private Research and Innovation. To find out more about this service, please contact your Relationship Manager, or contact us directly using the link above.
Episode 11: Be the master of disaster (Part 1)
“Exactly!” said Deep Thought. “So once you do know what the question actually is, you’ll know what the answer means.”
Douglas Adams, The Hitchhiker’s Guide to the Galaxy
[ Reading time: 16 minutes ]
If you fail to plan…
In the first 10 episodes, we have looked at some of the strategies and tactics you need to employ to ensure that you can prevent a cyber attack or at worst, limit the damage that is caused.
However, your working assumption should be that the likelihood of an attack of some sort is 100%: it’s a question of “when”, not “if”.
And when disaster strikes, you need a plan. As Benjamin Franklin famously put it, “If you fail to prepare, you are preparing to fail.” I’m devoting 2 episodes to the topic of disaster recovery planning. In Part 1, I’ll discuss how well-organised backups are the essential ingredient in your plan.
As simple as 3-2-1
In the event of a catastrophic cyber attack, such as a major ransomware attack, you might be facing a loss of data as well as systems. You might be unable to log in to your computer, access your emails, or communicate with colleagues through the normal organisational channels. Your business processes will fail to operate as normal. You might be staring at a major recovery operation that requires rebuilding your network and systems.
Your strategy of last resort in such an event is your backup system. This is what is going to get you back on your feet, but all too often it is left to the IT team to work out how best to do this for the organisation. Given how critical it is when you really need it, backup should assume much more prominence, and the strategy and implementation should be owned and directed by senior leaders.
It’s not exactly rocket science, but it does require some work to be effective and dependable. We’ll sum it up as the rule of 3-2-1, illustrated in a short sketch after the list below.
3: make sure you have more than one backup. Ideally, you’ll have 3. In old gendered language, this was sometimes referred to as “grandfather-father-son”—three generations of backups.
2: store your backups on 2 separate media. So if one system fails, you’ve still got access to backups by another route. Backing up the backup.
1: keep 1 backup copy off-site. So that if there’s a major incident which destroys your server room, you can still recover (once you’ve replaced your backup system).
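To make the rule concrete, here’s a minimal sketch of how an IT team might check a backup inventory against 3-2-1. The inventory format and field names are my own illustration, not taken from any particular backup product.

```python
# A minimal sketch of a 3-2-1 compliance check over a backup inventory.
# The inventory format and field names are illustrative assumptions,
# not taken from any particular backup product.

backups = [
    {"name": "nightly-disk",  "medium": "disk",  "offsite": False},
    {"name": "weekly-tape",   "medium": "tape",  "offsite": False},
    {"name": "monthly-cloud", "medium": "cloud", "offsite": True},
]

def check_3_2_1(backups):
    """Return a list of 3-2-1 rule violations (empty list = compliant)."""
    problems = []
    if len(backups) < 3:
        problems.append(f"only {len(backups)} backup copies (need 3)")
    if len({b["medium"] for b in backups}) < 2:
        problems.append("all copies on the same medium (need 2 kinds)")
    if not any(b["offsite"] for b in backups):
        problems.append("no off-site copy (need at least 1)")
    return problems

issues = check_3_2_1(backups)
print("3-2-1 compliant" if not issues else "; ".join(issues))
```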
Keep it lean and clean
Most organisations run their backups daily (or nightly) 7 days a week, so they have 7 generations. That means you can restore systems and data back to any point in the last 7 days.
After that, it’s common to keep the 7th daily backup as a “weekly” backup, and to keep 4 weekly backups through the month, then keep each 4th week’s backup as a “monthly” and keep 12 of these through the year. Some organisations will opt for termly backups, and some will keep “yearly” backups going back several years.
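For the technically curious, here’s a rough sketch of how that rotation might be expressed in code; backup software normally automates this for you. The promotion rules (a Sunday backup becomes the weekly, the first Sunday of the month becomes the monthly) are one common convention, not a standard.

```python
from datetime import date, timedelta

# A sketch of the rotation described above: keep 7 dailies, 4 weeklies
# and 12 monthlies. The promotion rules are an illustrative convention.

def classify(day: date) -> list[str]:
    tags = ["daily"]                    # every backup starts life as a daily
    if day.weekday() == 6:              # Sunday's backup is promoted to weekly
        tags.append("weekly")
        if day.day <= 7:                # first Sunday of the month -> monthly
            tags.append("monthly")
    return tags

def retained(today: date, day: date) -> bool:
    """Is the backup taken on `day` still inside its retention window?"""
    age = (today - day).days
    tags = classify(day)
    return (age < 7                                # 7 daily generations
            or ("weekly" in tags and age < 28)     # 4 weekly generations
            or ("monthly" in tags and age < 365))  # 12 monthly generations

today = date(2024, 6, 1)
kept = [today - timedelta(days=n) for n in range(400)
        if retained(today, today - timedelta(days=n))]
print(f"{len(kept)} backups retained out of 400 nightly runs")
```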
There’s a logic to this strategy, but it largely exists to accommodate the fear, often more perceived than real, of needing to retrieve something from a year or more ago that was deleted inadvertently. There can be a valid business imperative for this, but it encourages hoarding data as a substitute for strong data management processes.
Backups are not the same as archives, and you shouldn’t treat the backup system as your archival solution.
What’s more, hoarding costs. The more you keep, and the longer you keep it, the more that costs. (I’ll say more on that later.)
Once you’ve decided what to back up, you can decide where to store it.
Horses for courses
The choices are disk or tape. Disk is generally preferred because it’s much faster to read and write than tape, so backup and restore operations are faster. Despite the falling costs of disk storage, tape is still much cheaper, and because it’s removable media, you can have as much backup storage as you need just by adding more tapes.
You have to think about the total cost of ownership, though. To use tapes you need to invest in a tape drive (or two, for resilience) and usually a caddy to rotate the tapes as they are used. The system needs manual intervention at least weekly to remove tapes and store them in a safe somewhere else. Disks just take care of themselves.
In most organisations, servers are virtual rather than physical and servers (or virtual machines) are backed up as a whole, rather than just the individual data files they hold. This simplifies the process of configuring backups and speeds up restoring data and systems. This convenience comes at the cost of storage—a whole virtual machine requires more storage space than just the data files it holds.
You need to balance your backup storage requirements and cost against that convenience.
Take it away
The final pillar of the 3-2-1 principle is keeping backups in a separate location from where they are made. Usually (and ideally) that means “off site”. If you’re going to keep media on-site, you need a fire-proof safe located on premises somewhere other than where the backup system lives. A different building as a minimum, or a different campus location if that’s an option. Or you might need a third party contract for off-site secure storage.
Cloud storage is increasingly an option for storing backups. It’s convenient, and it ticks the boxes for separate media as well as being off-site. Some cloud backup vendors will even provide you with cloud storage to restore backups into in the event of a disaster. Unfortunately, there’s a cost for this convenience, and it’s charged by the gigabyte. Like so much of the IT landscape that is moving to a subscription model for services, it sits on the revenue budget, which poses a challenge for some organisations.
As an example, 45TB of cloud storage with one leading cloud vendor will cost almost £1,750 a month. That’s over £20k annually, and works out at over 45p per GB per year, more than 230 times the cost of tape. So, for all the benefits of cloud storage, you need to think carefully about the costs and be selective about what data you choose to back up to the cloud.
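For transparency, here’s the arithmetic behind those figures. The £1,750 monthly price is the example quoted above; the tape cost per gigabyte is my own rough assumption for LTO media.

```python
# Reproducing the cost arithmetic above. The monthly price is the
# article's example; the tape figure is a rough assumption for LTO media.

capacity_gb  = 45_000                       # 45 TB expressed in GB
monthly_cost = 1_750                        # £ per month for cloud storage
annual_cost  = monthly_cost * 12            # = £21,000 a year
cost_per_gb  = annual_cost / capacity_gb    # ~£0.47 per GB per year

tape_per_gb  = 0.002                        # ~0.2p per GB (assumed LTO media cost)
print(f"annual: £{annual_cost:,}, per GB: {cost_per_gb * 100:.0f}p, "
      f"~{cost_per_gb / tape_per_gb:.0f}x the cost of tape")
```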
Spot the difference
There are several different ways of backing your data up.
The obvious way is to make a copy of everything. That’s called a “full” backup. It’s always the first backup you need to make.
After that, instead of backing everything up every time, you can reduce the time and storage required by only backing up data that’s changed: either everything that’s changed since the last full backup (a “differential” backup), or just what’s changed since the last backup of any kind (an “incremental” backup). That can be a big saving in both time and storage, though how much will vary for each organisation.
As always, there’s a trade-off for these efficiencies: restores take longer, because you are reassembling data from 2 or more backups. A differential restore needs the last full backup plus the latest differential; an incremental restore needs the last full backup plus every incremental taken since, as the sketch below shows. Your recovery time will be affected by this: more of that in Part 2.
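Here’s a small sketch of the difference. The day numbers are illustrative: a full backup on day 0, then a restore to day 6 under each strategy.

```python
# A sketch of what each restore has to read, for the three backup types
# described above. Day numbers and the schedule are illustrative.

def restore_chain(kind: str, full_day: int, target_day: int) -> list[int]:
    """Which backup runs must be read to restore to `target_day`?"""
    if kind == "full":
        return [target_day]                 # one backup: fast to restore
    if kind == "differential":
        return [full_day, target_day]       # last full + latest differential
    if kind == "incremental":
        # last full + every incremental taken in between
        return [full_day] + list(range(full_day + 1, target_day + 1))
    raise ValueError(kind)

print(restore_chain("full", 0, 6))          # [6]
print(restore_chain("differential", 0, 6))  # [0, 6]
print(restore_chain("incremental", 0, 6))   # [0, 1, 2, 3, 4, 5, 6]
```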
Backups against the wall
Because backups are a centralised store of your organisation’s data, they are a key target for cyber attackers. Why spend time hunting for valuable data if you can get hold of the backups and browse through them at your leisure?
It’s a given that all your backups should be encrypted using a strong encryption standard (such as AES-256), and you shouldn’t be using a backup system that doesn’t do this by default. Don’t forget that you need to protect the decryption keys that unlock the backups. That means having a secure vault somewhere. Many backup systems will store these securely online for you; if not, a reputable password management tool (I mentioned these in episode 3) or online vault can do the job.
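If you’re curious what “encrypted with AES-256” looks like in practice, here’s a minimal sketch using Python’s cryptography library. Real backup products manage keys and nonces for you; the point is that the key, not the encrypted data, is what needs vaulting.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# A minimal sketch of AES-256 encryption as a backup system might apply it.
# Real products handle this internally; it shows why the key is the thing
# you must vault.

key = AESGCM.generate_key(bit_length=256)   # the decryption key: vault this!
nonce = os.urandom(12)                      # must be unique per encryption

backup_data = b"contents of tonight's backup archive"
ciphertext = AESGCM(key).encrypt(nonce, backup_data, None)

# Without the key the ciphertext is useless; with it, recovery is trivial.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == backup_data
```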
Because this is your recovery option of last resort, if an attacker compromises your backup system (by scrambling the backups or locking you out of the backup system itself) then you’re in real trouble. If your backup system is a physical server on-site, you need to put really strong protections around it, using some of the measures I mentioned in previous episodes (VLANs, MFA, break-glass accounts). Cloud-based systems generally have these protections already in place, including “immutable” backups. Read on…
Set it in stone
One of the features of cloud storage which is becoming increasingly important is “immutable storage”. That means that the data is prevented from being changed or deleted. That’s not generally true of on-premises backup systems (although it is a feature in some). And it’s important for 2 reasons.
Firstly, and most obviously, it means that no malicious actor can tamper with the backup, not even someone inside your own IT team. Even the keys to the kingdom (see episode 9) won’t unlock immutable backups.
Secondly, an immutable backup provides an audit trail for data where you might have regulatory or compliance requirements. There’s no scope for someone to delete an email or file if it’s in immutable storage.
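As an illustration, here’s what writing an immutable backup can look like with one widely used service, Amazon S3 Object Lock (other cloud vendors offer equivalents). The bucket name is hypothetical, and the bucket must have been created with Object Lock enabled.

```python
from datetime import datetime, timedelta, timezone
import boto3  # pip install boto3

# A sketch of writing an immutable backup with Amazon S3 Object Lock,
# one example of immutable cloud storage. The bucket name is hypothetical
# and the bucket must have been created with Object Lock enabled.

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-org-backups",           # hypothetical bucket
    Key="backups/2024-06-01-full.bak",
    Body=open("2024-06-01-full.bak", "rb"),
    ObjectLockMode="COMPLIANCE",            # no one can change or delete it,
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)                                           # not even an admin, until retention expires
```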
From cloud to cloud
It’s important to realise that “cloud first” doesn’t mean “cloud only”. While cloud-hosted systems provide resilience, data that is stored in the cloud is not immune from deletion. Whilst OneDrive and SharePoint have recycle bins that retain deleted files for a limited period (93 days by default), once files are deleted from there, they are gone for good.
You need to include cloud backups—that is, backups of cloud data—in your backup strategy. That usually involves a separate contract with a cloud-to-cloud backup service provider.
Always verify
Back in episode 8, I introduced the principle of zero trust, “never trust, always verify”, a hardening of the old Russian proverb “trust, but verify”. It applies very directly to your backup system. A failed backup isn’t going to help you get out of trouble when you need it, so you need to check that your backups have completed successfully. Most backup systems have an option to verify the backup once it has completed, and some will automatically repeat the backup process if it has failed. You should be checking the backup status routinely.
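As a sketch of what routine verification can mean in practice, here’s a simple check that recomputes each backup file’s checksum and compares it with the value recorded when the backup was written. The manifest format is my own illustration; most backup tools build this in.

```python
import hashlib
from pathlib import Path

# A sketch of routine verification: recompute each backup file's checksum
# and compare it with the one recorded when the backup was written.
# The manifest format is illustrative; most backup tools build this in.

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MiB at a time
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest: dict[str, str], backup_dir: Path) -> list[str]:
    """Return the names of backups that are missing or corrupted."""
    failures = []
    for name, expected in manifest.items():
        path = backup_dir / name
        if not path.exists() or sha256(path) != expected:
            failures.append(name)
    return failures
```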
A better way to verify that your backups are working is to use them for a test recovery, which I’ll say more about in Part 2.
Don’t let the data overstay its welcome
A word about what to back up, and how long to keep backups.
Most organisations take an approach which we might fairly summarise as “back up everything”. Better safe than sorry. As we’ve seen, though, that can get expensive, especially if you want to benefit from immutable cloud storage.
You need to view your backups as a component of your disaster recovery planning, not as an archival strategy: they are the means to recover your systems and business data. It’s unlikely that you’ll ever want (or need) to restore any of your key line-of-business systems to how they were more than about a month ago, let alone 6 months or more. The implications of rebuilding systems that are out of date by that much are mind-boggling.
Identify the data you want to archive for longer than that, and manage that separately from the recovery process. Of course, that implies you have a working Information Asset Register!
Don’t forget that any personal data you store in backups is still subject to Data Protection legislation. A subject access request might require you to find and disclose data held in backups, and a person’s “right to be forgotten” means that, as a minimum, you need to have processes in place to address this when you restore backups containing personal data.
These regulatory requirements are easier to manage if you back up data for archive separately from line-of-business systems and adopt a “data minimisation” approach, so that you are only backing up essential data and only for as long as is absolutely necessary. That helps minimise your regulatory risk.
A Final [Deep] Thought
In the next episode (Part 2), I’ll explore how to build an effective Disaster Recovery Plan for restoring from backups.
For now, you can take useful steps forward by checking out your organisation’s backup plans. Do you have a backup strategy? Does it follow the 3-2-1 principle (3 backups, 2 media types, 1 off-site or off-line)? Do you have immutable backups as part of your strategy? Are you backing up your cloud storage? Are you verifying that your backups are successful? Do you do any backup testing?