Archival storage — cheap to keep, slow and sometimes costly to retrieve from.
Cold storage is a tier of cloud storage priced for data you rarely touch — low monthly cost per gigabyte in exchange for slow retrieval and often per-read fees.
The archetypes are AWS S3 Glacier and Glacier Deep Archive: storage that costs a small fraction of a cent per gigabyte per month, but where retrieving an object can take minutes, hours, or, in the deepest tier, up to twelve hours of thaw time. You also pay egress fees on the way out and, on many providers, an early-deletion penalty if you remove an object before a minimum retention window. Cold storage is a good fit for data you are reasonably confident you will not read — compliance archives, photos from a 2015 wedding you last opened in 2019, raw video you finished a year ago and never revisited.
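The trade-off can be made concrete with a back-of-envelope calculation. Every price below is an illustrative assumption for the sake of the arithmetic, not any provider's published rate:

```python
# Back-of-envelope break-even: cold vs. standard storage.
# All prices below are illustrative assumptions, not real provider rates.
STANDARD_PER_GB_MONTH = 0.023  # hypothetical "hot" tier, $/GB/month
COLD_PER_GB_MONTH = 0.001      # hypothetical archival tier, $/GB/month
RETRIEVAL_PER_GB = 0.02        # hypothetical thaw fee, $/GB
EGRESS_PER_GB = 0.09           # hypothetical egress fee, $/GB

def total_cost(gb: float, months: int, retrievals: int, cold: bool) -> float:
    """Total cost of storing `gb` for `months` with `retrievals` full reads."""
    rate = COLD_PER_GB_MONTH if cold else STANDARD_PER_GB_MONTH
    storage = gb * months * rate
    # Egress is paid either way; the thaw fee applies only to the cold tier.
    reads = gb * retrievals * (EGRESS_PER_GB + (RETRIEVAL_PER_GB if cold else 0))
    return storage + reads

# 100 GB archived for three years and read back once in full.
hot_total = total_cost(100, 36, 1, cold=False)
cold_total = total_cost(100, 36, 1, cold=True)
```

With these made-up numbers, the archival tier wins by a wide margin when the data is read once in three years; the ranking flips quickly as reads become frequent, which is exactly why cold storage suits archives and not active backups.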
Cold storage is a bad fit for an active backup repository. Backups need to be readable in a hurry, and a backup engine needs to list, verify, and prune objects cheaply — operations that become painful and expensive at glacial tiers.
In macup, cold storage is something you opt into on a BYO-storage destination, not the default. If you point macup at your own S3-compatible bucket and configure a lifecycle rule that migrates old objects to an archival class, you own that decision and the trade-off: slower restores on older snapshots, in exchange for a smaller storage bill.
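As a sketch, an AWS-style S3 lifecycle rule that moves objects under a hypothetical `snapshots/` prefix to Glacier Deep Archive after 180 days might look like this (the prefix name and day count are illustrative, not anything macup prescribes):

```json
{
  "Rules": [
    {
      "ID": "archive-old-snapshots",
      "Status": "Enabled",
      "Filter": { "Prefix": "snapshots/" },
      "Transitions": [
        { "Days": 180, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

A rule like this is applied to the bucket itself (for example with `aws s3api put-bucket-lifecycle-configuration`), so it takes effect regardless of which tool wrote the objects — which is why the trade-off is yours to own.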