Splitting files into variable-sized blocks so small edits move kilobytes, not gigabytes.
Chunking splits files into variable-sized blocks so a small edit doesn’t force the engine to re-upload an entire large file.
Naive backup tools treat the file as the unit: change one byte in a 40 GB Lightroom catalog, re-upload 40 GB. Chunking uses a rolling hash — content-defined boundaries — to slice that catalog into thousands of roughly 4 MB pieces, each hashed and stored independently. Edit a single image in the catalog and typically two or three chunks change. The rest are already in the repository and get referenced by hash. The next snapshot uploads a few megabytes, not forty gigabytes.
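To make “referenced by hash” concrete, here is a minimal sketch of how a snapshot can be planned as an ordered list of chunk digests, with only unseen digests needing their bytes uploaded. The `digest` and `plan` helpers, and their names, are illustrative assumptions, not macup’s actual API.

```swift
import Foundation
import CryptoKit

// Hash a chunk's contents; the digest doubles as its name in the repository.
func digest(_ chunk: Data) -> String {
    SHA256.hash(data: chunk).map { String(format: "%02x", $0) }.joined()
}

// Given the chunks of the new file version and the set of hashes already stored,
// return the snapshot manifest plus the subset of chunks that actually need upload.
func plan(chunks: [Data], alreadyStored: Set<String>) -> (manifest: [String], toUpload: [Data]) {
    var manifest: [String] = []
    var toUpload: [Data] = []
    for chunk in chunks {
        let h = digest(chunk)
        manifest.append(h)                                        // every chunk is referenced by hash
        if !alreadyStored.contains(h) { toUpload.append(chunk) }  // only changed chunks move
    }
    return (manifest, toUpload)
}
```

On the Lightroom example, the manifest still lists thousands of hashes, but `toUpload` holds only the two or three chunks whose bytes actually changed.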
The “content-defined” part matters. Fixed-size chunking (every 4 MB starting from byte zero) would break the moment you insert or delete bytes near the front of a file — everything downstream shifts, and every chunk looks new to the engine. Rolling-hash boundaries move with the content, so insertions and deletions only affect the chunks around them. This is what makes dedup work on real-world files like video timelines, database dumps, and virtual-machine images.
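A content-defined boundary check is small enough to sketch. The following is a simplified gear-style rolling-hash chunker; the table derivation, the 1 MB/8 MB bounds, and the ~4 MB mask are illustrative assumptions for the sketch, not macup’s real parameters.

```swift
import Foundation

struct ContentDefinedChunker {
    let minSize = 1 << 20              // 1 MB: never cut before this
    let maxSize = 8 << 20              // 8 MB: always cut by this point
    let mask: UInt64 = (1 << 22) - 1   // boundary test tuned for ~4 MB average chunks
    let gear: [UInt64]                 // 256 pseudo-random values, one per byte value

    init() {
        // A real engine ships a fixed table so boundaries are reproducible across runs;
        // here the table is derived deterministically from a simple mixer.
        var state: UInt64 = 0x9E3779B97F4A7C15
        gear = (0..<256).map { _ in
            state = state &* 6_364_136_223_846_793_005 &+ 1_442_695_040_888_963_407
            return state
        }
    }

    // Split data into chunks whose boundaries depend only on nearby bytes,
    // so an insertion near the front does not shift every later boundary.
    func split(_ data: Data) -> [Data] {
        var chunks: [Data] = []
        var start = data.startIndex
        var hash: UInt64 = 0
        var length = 0
        for (i, byte) in zip(data.indices, data) {
            hash = (hash &<< 1) &+ gear[Int(byte)]
            length += 1
            let atBoundary = length >= minSize && (hash & mask) == 0
            if atBoundary || length >= maxSize {
                chunks.append(data[start...i])
                start = data.index(after: i)
                hash = 0
                length = 0
            }
        }
        if start < data.endIndex { chunks.append(data[start...]) }
        return chunks
    }
}
```

Because the rolling hash only ever reflects the bytes immediately behind the cursor, inserting data near the front of a file changes the chunk it lands in and perhaps its neighbor; every later boundary falls in the same place as before.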
In macup, chunking runs on your Mac before anything leaves the machine. The engine slices files, hashes each chunk, encrypts the chunks with your key, and only then uploads. The destination — macup Cloud, an external SSD, or an S3 bucket — never sees a whole file. It sees a bag of encrypted, content-addressed blocks.
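Putting the pieces together, a rough sketch of that client-side pipeline might look like the following, reusing the chunker and planning helpers above. `Destination`, `storedHashes`, and `upload(_:named:)` are hypothetical stand-ins for whichever backend receives the blocks, and ChaChaPoly from CryptoKit is used purely as an example cipher, not a statement about macup’s actual format.

```swift
import Foundation
import CryptoKit

// Hypothetical interface for a backup target (macup Cloud, an external SSD, an S3 bucket).
protocol Destination {
    var storedHashes: Set<String> { get }
    func upload(_ block: Data, named hash: String)
}

// Slice, hash, encrypt, and only then upload; returns the ordered list of
// chunk hashes that the snapshot records for this file.
func backup(file: Data, key: SymmetricKey, chunker: ContentDefinedChunker, to dest: Destination) throws -> [String] {
    let chunks = chunker.split(file)
    let (manifest, toUpload) = plan(chunks: chunks, alreadyStored: dest.storedHashes)
    for chunk in toUpload {
        // Encrypt on the client; the destination only ever sees sealed, content-addressed blocks.
        let sealed = try ChaChaPoly.seal(chunk, using: key)
        dest.upload(sealed.combined, named: digest(chunk))
    }
    return manifest
}
```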