Most backup advice is written for someone whose working set fits on the internal drive of their Mac. The photographer’s working set does not. A single wedding shoot leaves the card reader with 80 to 140 GB of RAW files. A three-camera music-video pickup doubles that. A portrait session that ends in 16-bit TIFF for print delivery will hand you files of 60 to 120 MB each before compositing. Before the year is out you are maintaining a 2 TB active Lightroom library on fast SSD, eight or ten external drives of closed shoots, and a NAS or a stack of platter drives holding the five-year archive. Nothing about the standard backup story quite fits.
The problem is not “back up my Mac.” Your Mac is the smallest piece. The problem is keeping the catalog, the active shoots, the delivered finals, and the archive in a topology where a single drive failure, a single ransomware event, or a single theft on a shoot day does not end a career. It is also, bluntly, the problem of keeping that topology honest after the hundredth shoot, when you are tired, when the margin of a job got eaten by travel, when the instinct is to skip the import-day redundancy step because it worked last time.
This guide is the strategy we would run if we shot for a living. It assumes you know Lightroom, you know what a .lrcat lock is, you have opinions about DNG versus proprietary RAW, and you have already argued with someone about smart previews. It sits alongside the complete guide to Mac backup but narrows the lens to a photographer’s specific failure modes and working rhythm.
The photographer’s storage topology
A typical working setup has three layers. The first is the Mac internal drive, usually 1 to 2 TB on a recent Apple silicon machine. This is where Lightroom Classic lives, where the catalog lives, where Capture One or Photo Mechanic stage temporary ingests, and where current-week previews and cache sit. It is fast, it is local, and you do not have enough of it.
The second layer is the shoot drives. External SSDs or Thunderbolt NVMe enclosures carrying the RAW files for the last six to twelve weeks. A 4 TB Samsung T7 or an 8 TB SanDisk Pro-G40 is the common shape. The catalog sits on the Mac, the RAW files sit externally, and previews or smart previews keep culling and editing possible when the drives are detached. This layer moves the most and is the most likely to be in transit, in a Pelican case on an aeroplane, or on a desk next to a cup of coffee that is one bump away from catastrophe.
The third layer is the archive. Organised by year, by client, or by project, the archive is the accumulated back-catalogue of closed work. It lives on spinning platters — a pair of 20 TB drives in a Drobo or Synology NAS, or on individually labelled externals on a shelf. In practice it is cold storage; you touch it only when a client returns for a re-edit or you need a frame for a portfolio refresh. Once a year closes, you rarely overwrite. You only add.
The backup strategy has to respect this three-layer topology. Treating all three layers the same is how photographers end up with a NAS duplicating eight years of closed archive to the cloud every night while the active catalog on the Mac is protected once a week by a Time Machine drive that has been unplugged for a month. The catalog and the active shoots change daily and carry the most risk. The archive is valuable, but stable. The strategy has to notice the difference.
What is actually at risk
When photographers think about data loss they picture a drive dying and RAW files disappearing. That is a real risk, but it is not the worst one. The worst one is losing the catalog.
The RAW files, in the end, are the source. If you have the RAW files and a backup of your XMP sidecars, you can rebuild the develop settings. The catalog is where years of keywords, star ratings, colour labels, collections, stacks, virtual copies, print layouts, and map-module location data live. It is where the narrative of your archive lives. A photographer with 400,000 catalog entries whose .lrcat is lost or corrupted has not lost their photographs — but has lost the ability to find any one of them. Navigability is the real asset. Recoverability without navigability is a warehouse with the lights off.
Losing the RAW files is expensive. Losing the catalog is career-ending. Back up the catalog like it is the thing you cannot replace, because it is.
Smart previews are a partial mitigation. A smart preview is a small DNG that keeps the edit history alive even when the source RAW is detached. A catalog restored from backup alongside its smart previews bundle lets you triage and continue editing before the RAW archive is back online. They do not substitute for originals, but they substitute, for a while, for the archive’s availability — which is often the thing you need on a deadline.
Lightroom-specific quirks
Lightroom Classic writes its catalog to a SQLite database in a single .lrcat file. While the app is open it holds a write lock and maintains a .lrcat-journal alongside. A naive backup tool that copies the .lrcat mid-transaction produces a torn file that will not open, or worse, opens and silently corrupts on first write. This is the single most common way photographer backups fail on restore day — the file is there, the backup tool reported success, and nothing opens.
macup handles this by detecting the .lrcat-journal. When it is present the agent waits for the session to close or for the journal to clear before snapshotting the catalog. If you keep Lightroom open all day, the RAW files and XMP sidecars are captured continuously, and the catalog itself is snapshotted during the first quiet window — typically overnight when the app is closed or idle. You never get a torn catalog in your backup set.
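The quiet-window check is simple enough to sketch. This is an illustration, not macup’s actual implementation — `catalog_is_quiet` is a hypothetical name, and the file names reflect how SQLite and Lightroom Classic are known to leave journal and lock files alongside an open catalog:

```python
from pathlib import Path

def catalog_is_quiet(catalog: Path) -> bool:
    """True when the .lrcat looks safe to snapshot: while a session is open,
    a SQLite journal (Catalog.lrcat-journal) and a lock file
    (Catalog.lrcat.lock) sit alongside the catalog file."""
    journal = catalog.with_suffix(".lrcat-journal")
    lock = catalog.with_name(catalog.name + ".lock")
    return not journal.exists() and not lock.exists()
```

An agent would poll a check like this before copying the .lrcat, deferring the snapshot until both files are gone.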
The Previews.lrdata and Smart Previews.lrdata bundles are excluded by default. These are rebuildable from the RAW originals, and including them would dominate the destination — a working photographer routinely has 40 to 80 GB of previews. The XMP sidecars, on the other hand, are included by default. They are small, they carry edits that round-trip with Bridge or Photoshop, and they are often missed by tools that filter by extension. Backing up RAWs and XMP sidecars together, with the catalog snapshotted at rest, is the minimum definition of “backed up” for a Lightroom workflow. The Lightroom workflow page goes deeper on the per-file behaviour.
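The filter is cleaner by bundle than by extension. A minimal sketch, assuming any path component ending in `.lrdata` marks a rebuildable preview or cache bundle; `should_back_up` is an illustrative name, not a real tool’s API:

```python
from pathlib import Path

def should_back_up(path: Path) -> bool:
    """Keep RAWs, XMP sidecars, and the catalog; skip anything living
    inside a rebuildable *.lrdata preview or cache bundle."""
    return not any(part.endswith(".lrdata") for part in path.parts)
```

Filtering on the bundle name is what keeps the XMP sidecars in: an extension whitelist that forgets `.xmp` silently drops the edits, while a bundle blacklist only excludes what is provably rebuildable.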
Capture One users have a structurally similar problem. The session or catalog file is locked while the app is open, the preview cache is large and regeneratable, and edits are written into a sidecar structure alongside the image. The same strategy applies: snapshot at rest, continuously back up the RAWs and sidecars, skip the previews.
Protecting shoot drives in the field
Shoot days are where the strategy gets lived in. The card comes out of the camera and goes into a reader. Import runs. At that moment the RAW files exist in exactly two places: on the card and on the shoot drive — and the moment the card is wiped, in only one. This is the window of highest risk in the entire workflow, because a dropped drive or a failed SSD takes the day’s work with it.
The working practice is card duplication and import-phase redundancy. Either ingest the same cards into two drives in parallel (Photo Mechanic and ShotPut Pro both handle this natively), or accept a slower workflow in which you do not wipe cards until the shoot drive has landed in a second location. On the road, that second location is often a portable destination — a second SSD in a different bag, or a fast Thunderbolt drive carried as insurance. At home it is the first sync to the archive layer and, in parallel, the first upload to your cloud destination.
Treat shoot drives as impermanent until the RAW files have reached two archive locations. One copy on a shoot drive is not a copy; it is the original. Two copies on two drives in the same bag is not two locations; it is one location. A copy on a shoot drive in your hotel room and a copy in macup Cloud is two locations. Wiping the card should require two green lights, not one. Drive failure is more common on travel days than any other day — the jostle, the temperature changes, the hand luggage pressure — and import-phase redundancy is specifically what prevents travel-day failure from turning into a career event.
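The two-green-lights rule is mechanical enough to script. A sketch, assuming a SHA-256 compare of every card file against each destination; `safe_to_wipe` is a hypothetical helper, not the API of any real ingest tool:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_wipe(card: Path, destinations: list[Path]) -> bool:
    """Two green lights: every file on the card must exist, byte-identical,
    in at least two separate destinations before the card is formatted."""
    for src in card.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(card)
        matches = sum(
            1 for dest in destinations
            if (dest / rel).is_file() and sha256(dest / rel) == sha256(src)
        )
        if matches < 2:
            return False
    return True
```

Note that the check counts destinations, not copies: two matches on the same drive would still be one location, so the list passed in should hold genuinely separate devices.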
Archive strategy
Closed years are easier. Once a year is done and client work has been delivered and invoiced, the archive for that year should be frozen. Organise by year, by client, by project, whichever matches the way you think — but commit to the organisation and then stop changing it. The purpose of the archive is to be retrievable, and a retrievable archive is one with stable paths.
The archive splits cleanly into hot and cold. Hot storage is what you want for recent closed years — say, the last two — where a re-edit request means you want retrieval in seconds, not hours. Cold storage is what you want for older years where retrievals are rare and cost matters more than latency. The correct split depends on how often your older work is revisited. A wedding photographer’s three-year-old archive gets touched maybe twice a year; a commercial photographer’s might get touched weekly as campaigns recycle. Run the numbers on your own revisit rate, and use our cost calculator to size the cloud portion against your terabytes.
A frozen year gets two copies on two different physical drives in two different locations, plus the cloud copy. That is the minimum. The physical drives should not be the same make and model — drives from the same production batch fail in correlated ways — and the two locations should not share the same power, the same roof, or the same burglar. A safety deposit box is one option. A trusted colleague’s studio across town is another.
Client deliverables
Delivered work is its own problem. When a client receives a gallery, a print-ready TIFF, or a final retouched PSD, that asset has a different lifecycle from the source RAW. It has been paid for. It has been signed off. It is the version the client will refer to two years later when they ask for a re-edit for a reprint, and it is the version your contract likely requires you to reproduce on demand.
Keep delivered finals separate from the working archive. A flat structure organised by client and date — Client Name / 2026-03 / Finals / — is easier to search in three years than anything nested inside a Lightroom collection. Keep the delivered gallery as the client saw it, the flattened print files, and a short text note with the delivery date. Back this folder up with the same continuous protection as your catalog, not the slower cadence of the archive; re-edits come in unpredictably, and you want snapshot history on the delivered files so an accidental overwrite during a follow-up session is recoverable.
Version history is specifically why cloud backup matters for client work. Local archive drives give you the current copy. They do not give you the copy from six months ago, before you touched that folder to grab a frame for a portfolio refresh and inadvertently overwrote the master. Scrubbing back to the Tuesday before is what cloud snapshots are for. Our cloud versus local guide goes deeper on where each strategy earns its keep.
A concrete setup we would recommend
Here is what we would run for a photographer with a 2 TB active set on the Mac, a 12 TB closed archive, and shoot volumes of 200 to 400 GB a week.
First, continuous cloud protection of the catalog, the active Lightroom library, the XMP sidecars, and the current-quarter shoot drives, running to macup Cloud. The catalog snapshots at rest when Lightroom is closed. The RAWs and XMPs stream continuously. Encryption happens on the Mac before upload; the cloud destination never sees a plaintext file, which is what zero-knowledge means in practice. Previews and smart previews are excluded. Retention is thirty daily, fifty-two weekly, sixty monthly — enough to scrub back two months at daily granularity, a year at weekly, and five years at monthly.
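That retention shape is a classic grandfather-father-son schedule, and the keep-or-prune decision can be sketched directly. The anchor days here (Monday for weeklies, the 1st for monthlies) are arbitrary choices for illustration, not macup’s actual rules:

```python
from datetime import date

def keep_snapshot(snap: date, today: date,
                  daily: int = 30, weekly: int = 52, monthly: int = 60) -> bool:
    """Hypothetical GFS retention check: keep every snapshot inside the
    daily window, Monday snapshots inside the weekly window, and
    first-of-month snapshots inside the monthly window (~5 years)."""
    age = (today - snap).days
    if age < daily:
        return True
    if age < weekly * 7 and snap.weekday() == 0:   # Monday anchor
        return True
    if age < monthly * 31 and snap.day == 1:       # first-of-month anchor
        return True
    return False
```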
Second, a mirror to a fast external drive of the same active set, refreshed continuously while the drive is attached. This is the restore-in-minutes path for the day your Mac logic board dies two hours before a shoot. A 4 TB SSD on the desk, running a macup external-drive destination, is the difference between a two-hour crisis and a two-day one.
Third, the annual frozen archive on two physically separate spinning drives — one on the shelf in the studio, one at a relative’s house or in a safety deposit box — plus a cold-tier cloud copy of the same data. The two physical drives are re-verified once a year by running a read-through and a checksum compare. Platter drives sitting on shelves suffer silent bit-rot; the annual verify is how you catch it before a client asks for a 2019 re-edit and you discover three bad sectors.
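The annual verify amounts to a checksum manifest written at freeze time and re-checked on each read-through. A minimal sketch — `manifest.json` and both function names are assumptions, and a real tool would stream large files rather than read them whole:

```python
import hashlib
import json
from pathlib import Path

def write_manifest(archive: Path) -> None:
    """Freeze a year: record a SHA-256 digest for every file in the archive."""
    manifest = {
        str(p.relative_to(archive)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(archive.rglob("*"))
        if p.is_file() and p.name != "manifest.json"
    }
    (archive / "manifest.json").write_text(json.dumps(manifest, indent=2))

def verify(archive: Path) -> list[str]:
    """Annual read-through: return the relative paths that are missing or
    whose bytes no longer match the frozen manifest — bit-rot candidates."""
    manifest = json.loads((archive / "manifest.json").read_text())
    bad = []
    for rel, digest in manifest.items():
        p = archive / rel
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != digest:
            bad.append(rel)
    return bad
```

Run `write_manifest` once when the year closes, then `verify` against each physical copy every year; a non-empty result on one drive while the other drive and the cloud tier still verify clean is exactly the silent-rot case the paragraph above describes.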
Total cost for a photographer at this scale lands in the range of forty to seventy dollars a month for the cloud portion depending on archive size, plus the one-time cost of the physical drives. That is a line item against a single re-edit fee on a single good client.
The failure modes we see in photographer practice
External drive failure on a shoot day is the most common event. The SSD that has been fine for eighteen months picks the moment you are on location to throw a controller error. The answer is import-phase redundancy: the day’s RAWs should never sit on a single drive, even for an hour.
Laptop drops in airports are the second most common. A MacBook Pro with the day’s catalog open and a shoot drive attached via Thunderbolt goes onto the floor in the security line. The catalog is fine in the cloud because the agent captured the last quiet window overnight. The shoot drive is fine because you ingested in parallel. You are out a laptop, not a career.
Ransomware encrypting the home folder is rarer but nastier. The failure mode is that a naive continuous backup will dutifully capture the encrypted files and overwrite the good copies. This is why snapshot history and immutable cloud storage matter — the cloud copy has a version from before the encryption event, and nothing the malware can do from the Mac can reach back in time to destroy it.
Accidental overwrite of a catalog is the quietest failure and the one photographers notice latest. You work for an afternoon, you hit save, and the catalog you are now saving to is not the one you meant to open — a colleague opened an old one for reference, or you clicked the wrong recent item. By the time you notice, the current catalog has three hours of edits in the wrong file, and the catalog you meant to work on has been silently diverging for months. Snapshot history is what saves you. You scrub back to this morning, pull down the correct catalog, and merge the afternoon’s work in.
The one thing to take away
Photographer backup is not about copying files. It is about maintaining a living topology of a catalog, a set of shoot drives, a client-deliverable archive, and a frozen year-by-year back catalogue — in a way that survives the specific failures of the job. The catalog and the deliverables belong in the cloud with snapshot history. The recent shoots belong on an external SSD you can plug into a loaner Mac. The frozen archive belongs on two physical drives in two physical places, with a cold cloud tier as the disaster copy.
If that sounds like a lot to set up once, it is. The saving grace is that you only set it up once. After that the agent runs, the snapshots accumulate, and the strategy holds through the hundredth shoot the way it held through the first. When something eventually fails — and something will — you will find the catalog from last Tuesday is still there, the shoot drive from the March wedding is still readable, and the delivered gallery from 2023 is still exactly as the client received it.
Start with a fourteen-day trial of macup. Point it at the catalog, point it at the current-quarter shoot drive, and spend one evening getting the topology right. The other ninety-nine shoots will thank you.