Strategy

The 3-2-1 rule, reconsidered

Editorial lede: we still teach the 3-2-1 rule, but the underlying assumptions — tape, on-prem, single-machine workflows — are mostly gone. Here's what the rule is really trying to say.

If you have read anything about backup in the last decade, you have read the 3-2-1 rule. Three copies of your data. Two different kinds of media. One off-site. It is the most quoted piece of advice in the field, and rightly so. It is simple. It rhymes. A friend at dinner can remember it on the walk home.

It was also written, in its current phrasing, by the photographer Peter Krogh in his 2009 book The DAM Book, and the working conditions Krogh was describing look very little like the ones most creators and professionals deal with today. The rule is not wrong. It is advice from the era of DVDs, LTO tape, and single-machine photography workflows, carried into an era of solid-state drives, always-on cloud storage, and ransomware that is perfectly happy to log in with your own credentials. The spirit of 3-2-1 is correct. The letter of it needs a firmer look.

This guide is the firmer look. The goal is not to discard 3-2-1. The goal is to finish the sentence it started.

Where the rule came from

Peter Krogh’s The DAM Book — short for Digital Asset Management — was written for working photographers at a moment when the shift from film to digital was mostly finished but the tooling was not. Photographers were generating thousands of raw files per shoot. How to keep them and find them again in five years was a genuinely open question. Krogh’s book became the default answer. The 3-2-1 rule was a small piece of a longer argument about archiving and the lifecycle of a digital image.

In 2009, “two different media” was a literal instruction with literal options. Three copies typically lived on an internal hard drive, on a second drive or RAID, and on optical media such as DVD or Blu-ray, or LTO tape for studios that shot a lot. These were genuinely different technologies with genuinely different failure modes. A hard drive could fail mechanically. A DVD could delaminate. An LTO tape could be eaten by its own drive.

“One off-site” usually meant a drive in a desk drawer at the office or a hard drive at a trusted friend’s house. Cloud backup for individuals existed but was slow, expensive, and not widely trusted. The off-site copy was a physical object you moved with your hands.

Under those conditions, 3-2-1 was precise and correct. The count of copies protected against accidental deletion, the mix of media protected against category-wide hardware failure, and the off-site copy protected against the building burning down. Each clause was doing real work.

What the rule gets right

The rule gets three things right that no one should throw away.

The first is redundancy. A single copy is a single point of failure, and the probability of any given drive surviving five years is lower than most people believe. Three copies is not paranoia. It is arithmetic. At an annualised drive failure rate of two to three percent, a single-copy strategy over a working lifetime is closer to a guarantee of loss than a hedge against it.
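The arithmetic is worth making concrete. A minimal sketch in Python, assuming a constant annualised failure rate, fully independent copies, and (pessimistically) that a failed copy is never replaced:

```python
def survival_probability(annual_failure_rate, years, copies=1):
    """Chance that at least one of `copies` independent drives survives
    `years` years at a constant annualised failure rate. Assumes failed
    copies are never replaced, which understates the benefit of three
    copies in a real, maintained setup."""
    p_one_copy_lost = 1 - (1 - annual_failure_rate) ** years
    return 1 - p_one_copy_lost ** copies

# A single drive at 2.5% annual failure over a 30-year working life:
print(f"{survival_probability(0.025, 30):.0%}")            # roughly 47% -- a coin flip
# Three independent copies over the same period:
print(f"{survival_probability(0.025, 30, copies=3):.0%}")  # roughly 85%
```

The numbers are illustrative, not a prediction for any particular drive, but the shape of the result is the point: a single copy over a career really is closer to a coin flip than a hedge.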

The second is different failure domains. The original phrasing calls this “two different media,” which is narrower than the underlying idea. The point is that your copies should not be able to die together. If your laptop and your external SSD share a power strip during a surge, you have one copy, not two. If both drives are the same model from the same batch, you are closer to one than two. The rule is gesturing at independence of failure.

The third is off-site. Fires happen. Floods happen. Burglaries happen. Moves happen, and during moves, boxes get left in rental vans. The off-site copy survives events that take out an entire location at once. No amount of local redundancy substitutes for a copy that is simply not in the building.

Redundancy, independence, geographic separation — these are the durable core of 3-2-1. Any replacement has to preserve them.

What’s changed since 2009

Three things have changed about the landscape Krogh was writing into, and each one shifts the math.

The first is that consumer storage media have converged. In 2009, hard drives, optical media, flash drives, and tape were genuinely different technologies with genuinely different failure modes. In 2026, almost everything a creator touches is a flash device. The SSD in the laptop, the portable SSD on the desk, the NVMe module in the external enclosure, and even the “hard drive” many cloud services quietly use are solid-state at the media layer. They fail in similar ways, for similar reasons, on similar schedules. “Two different media” is not the protection it used to be, because the two media are often the same media in different housings.

The second shift is the economics of cloud storage. In 2009, keeping a terabyte off-site in the cloud was expensive enough to be a line item on a small studio’s budget. By 2026, a terabyte of continuously backed-up storage costs less than a reasonable dinner, and multi-terabyte cloud backups are routine for individual creators. The question has shifted from “do you have an off-site copy” to “is your off-site copy the right kind of off-site copy.”

The third shift is the one that matters most. Ransomware. In 2009, the dominant threat to a photographer’s archive was a drive wearing out, a DVD going bad, or a laptop getting stolen. In 2026, the dominant threat for small creative shops is an encrypted-and-locked machine with a ransom note on the desktop. The rule assumes, implicitly, that whatever destroyed your working copy is a random, stupid event — a drive, a fire, a flood. It does not assume an intelligent attacker who will notice your backups and try to destroy those too. For an adversary with your credentials, an off-site copy in the cloud is not automatically out of reach. The rule does not protect against an attacker who can log in to your backup.

This is where the rule runs out of room. Redundancy is necessary. Off-site is necessary. Neither is sufficient against an attacker who can authenticate.

The ransomware-era amendments

The security community noticed this gap about a decade ago and has been proposing amendments ever since. The most commonly cited is 3-2-1-1-0: three copies, two kinds of media, one off-site, one immutable or air-gapped, and zero errors on test restore. The numbers are not elegant but the thinking behind them is right.

The “one immutable” clause is the important one. An immutable backup is a copy that cannot be altered or deleted for a defined retention period — not by an attacker, not by a misconfigured script, and not by you. It is usually implemented at the storage layer with a feature called Object Lock. Once a snapshot is written under an Object Lock retention policy, the bytes are fixed for the length of that policy, and no amount of credential theft can shorten the window. This is what our macup Cloud service does by default for every byte it stores. Given the threat landscape most of our users face, we concluded it is no longer optional.
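As a concrete sketch of what the storage layer does, here is roughly how one snapshot object might be written under S3-style Object Lock with a boto3 client. The bucket and key names are hypothetical, and the bucket must have been created with Object Lock enabled (it cannot be switched on later); the point is that the retention date travels with the object and cannot be shortened afterwards:

```python
from datetime import datetime, timedelta, timezone

def upload_immutable_snapshot(s3, bucket, key, body, retain_days=30):
    """Write one snapshot under an S3 Object Lock retention policy.
    In COMPLIANCE mode, the object cannot be deleted or its retention
    shortened by anyone -- not the account owner, and not an attacker
    holding the account's credentials -- until the retain-until date.
    `s3` is a boto3 S3 client; names here are illustrative."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)
    return s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ObjectLockMode="COMPLIANCE",  # cannot be lifted, even by root
        ObjectLockRetainUntilDate=retain_until,
    )
```

Stolen credentials can still write new garbage, but they cannot reach back and destroy the locked snapshots, which is exactly the property the amendment is asking for.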

The air-gapped alternative is the offline version of the same idea: a copy physically disconnected from any network. An external SSD in a desk drawer, rotated on a regular schedule, is an air gap. Air gaps work because an attacker cannot reach what is not plugged in. They also work only until you plug them back in, which is the moment most home air-gap strategies fail.

The “zero” in 3-2-1-1-0 is the clause that matters most and is almost always ignored. Zero errors on test restore. A backup you have never restored from is a hypothesis, not a backup. You can have three copies across three continents on three media, and if none of them actually reads back, you have nothing. The rule is load-bearing on that zero.

Three tested copies. Two failure domains. One immutable. Test the restore.
The macup team

The multi-device era

There is a second thing 3-2-1 never anticipated, and it is not a threat — it is a workflow. The rule assumes one computer. In 2026, a working creator typically has several.

Consider a realistic setup. A MacBook Pro for travel. A Mac Studio at the desk for heavy lifting. An iPad for on-site review. A portable SSD that carries the active project between all three. A NAS at home for archival material and music libraries. Five devices, all participating in the same workflow, with files moving between them depending on where the work happens that day.

The question “do you have three copies” is harder in this setup than it was in 2009. Is the copy on the MacBook the same copy as the one on the Mac Studio, or are they two copies that happen to sync? Is the portable SSD a backup or a working drive? Is the NAS backed up, or is it the backup? The rule does not answer these questions. It assumes a centre of gravity — a single working machine — and radiates backups outward from there.

A modern equivalent has to sit underneath a distributed working set and treat it as one thing even when it lives in several places.

A proposed updated version of the rule

Here is what we teach our users, stated plainly.

Three tested copies. Two failure domains. One immutable. Test the restore.

Each clause does a specific job, and none of them is decorative.

Three tested copies preserves the redundancy argument from the original. Three survives the loss of any two, and protects against the case where one copy turns out on inspection to be unusable. “Tested” is the change. A copy you have not verified is not a copy. Tested means you have, within the last quarter, restored a non-trivial amount of data from it, confirmed the restored files open, and compared them against the originals. This is the only version of three copies worth counting.

Two failure domains replaces “two different media” with the idea the original was reaching for. A failure domain is any shared dependency that could take down more than one copy at once. A single drive model is a failure domain. A single power strip is a failure domain. A single building is a failure domain. A single cloud account, as recent incidents have taught many small businesses, is a failure domain. The rule is satisfied when your copies span at least two things that cannot plausibly all go wrong at once. In practice that means at least one local copy in your building and at least one off-site copy in a service you do not manage yourself.
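One way to make the failure-domain test mechanical is to tag each copy with every shared dependency it sits behind and look for anything common to all of them. A toy sketch, with illustrative copy names and domain tags:

```python
def single_points_of_failure(copies):
    """Return the dependencies shared by every copy.
    Any one of these failing would lose all copies at once."""
    return set.intersection(*copies.values())

copies = {
    "macbook":         {"building:home", "media:flash", "admin:me"},
    "nas":             {"building:home", "media:spinning-disk", "admin:me"},
    "cloud-immutable": {"site:provider", "media:object-store", "admin:provider"},
}

print(single_points_of_failure(copies))  # set() -- no single event takes out all three
```

Drop the cloud copy from that setup and "building:home" and "admin:me" both appear in the intersection: one fire, or one compromised account, loses everything. That is the check the clause is asking you to run, even if you run it on paper.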

One immutable is the ransomware clause. One copy has to be something an attacker with your credentials cannot delete. In practice that means either a cloud copy held under a retention policy that cannot be shortened — Object Lock, compliance mode, retention hold — or a genuinely offline copy that is not plugged in. Either works. Both is better. Our bias is toward the former, because the second requires you to remember to rotate a drive, and humans are bad at that. See our security overview for the specific protections we apply.

Test the restore is the zero clause, promoted to a verb because it needs to be one. A backup you have not tested is a backup you do not have. The correct cadence is quarterly, and the correct test is restoring a recent, meaningful file — the real thing, not a token — to a location other than the original, and opening it in the application that created it. If any step fails, the backup has failed.

Applying the updated rule to a real Mac setup

Consider a working example. A video editor with a MacBook Pro, a 4-terabyte external SSD for current project storage, and a Synology NAS at home for completed projects and music.

The working set lives on the MacBook Pro and the external SSD together — application and projects on the internal drive, media on the external. The external SSD is a working drive, not a backup.

The first copy is a continuous backup of both drives to the NAS at home. Same building, different hardware, different failure mode. This is the local copy and it handles the everyday case: deleted file, corrupted project, app behaving badly. Fast to restore from, which is the point.

The second copy is a continuous backup to a cloud service with Object Lock retention, running alongside the NAS backup and covering the same working set. This handles the off-site clause and the immutable clause in one stroke. If the building burns down, this copy survives. If an attacker encrypts everything in the building including the NAS, this copy survives. For most creators, it is the single most important piece of the setup.

That is three copies across two failure domains, one of them immutable. The final clause is on the editor: a quarterly restore test. Pick a recent project from the cloud copy. Restore it to a scratch location. Open it in the editing application. If it plays back, the backup works. If it does not, you have a problem to solve in a quiet week, rather than later, during a disaster.

The difference between this and a by-the-book 3-2-1 implementation is small in practice and large in behaviour. The editor is not counting DVDs. They are counting failure domains and verifying the copies they think they have are actually there. For more on the trade-offs between local and cloud, see cloud vs. local backup. For the threat model the immutable clause addresses, see our explainers on ransomware and drive failure.

When the rule doesn’t apply

There are categories of data the rule does not cover, and pretending otherwise has caused real losses.

Ephemeral data — render caches, build artefacts, package dependencies, application caches — is not backup material. It is regenerable from other data, and backing it up wastes storage and slows every snapshot you take. The rule is for the stuff you cannot regenerate.
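In practice this means maintaining an exclusion list in whatever backup tool you use. A small sketch of the idea; the patterns here are illustrative, not a recommendation for any particular tool:

```python
from fnmatch import fnmatch

# Hypothetical patterns for regenerable data -- tune for your own toolchain.
EXCLUDE_PATTERNS = [
    "*/node_modules/*",   # package dependencies: reinstallable from a lockfile
    "*/.cache/*",         # application caches
    "*/DerivedData/*",    # Xcode build artefacts
    "*/Render Cache/*",   # video render caches
    "*.tmp",
]

def should_back_up(path):
    """True unless the path matches any exclusion pattern."""
    return not any(fnmatch(path, pattern) for pattern in EXCLUDE_PATTERNS)

print(should_back_up("Projects/film/timeline.prproj"))              # True
print(should_back_up("Projects/site/node_modules/react/index.js"))  # False
```

The payoff is not just saved storage: every snapshot, verification pass, and restore test gets faster when the regenerable bulk is out of the way.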

Sync services are not backups. iCloud Drive, Dropbox, and Google Drive synchronise the same file across devices. If the file is deleted, corrupted, or encrypted, the deletion, corruption, or encryption propagates. Sync is convenience. It is not history, and history is what backup is for. A copy on iCloud that moves when the original moves is not a second copy. It is the same copy in two places.

One-off transfers — handing a drive to a colourist, uploading a deliverable to a festival, emailing a client a master — are deliveries, not backups. Treating them as part of your backup count leads to quiet disasters when the colourist returns the drive, wiped, months later.

The updated rule applies to your durable working data, and only to that. Know what yours is, back it up, and leave everything else out of the count.

Closing

The 3-2-1 rule is a good rule. It has lasted for a reason. The reason is that the ideas underneath it — redundancy, independence, separation, verification — are close to timeless, even when the specifics they were written against have moved on.

What you want, in 2026 and beyond, is not to memorise a rule. It is to understand what the rule is trying to protect against: a universe that is happy to eat your work, occasionally on purpose. Three copies across two failure domains, one of them immutable, all of them tested, is what that protection looks like now. The rhyme is a little worse. The survival rate is much better.

If you want the longer arc — foundations, the full catalogue of guides, and how the pieces fit together — start at the complete Mac backup guide for 2026 and work down. The rule is a door. What matters is what you do after you walk through it.

Ready to put this into practice?

14-day trial. No card. Set up in under five minutes.