Duplicity backups to Google Cloud Storage via S3 stopped working after Debian upgrade
The solution I’ve written about before uses Amazon AWS. I’ve since moved my backups from Amazon to Google Cloud Storage. Honestly, I wanted to move them away from big tech entirely, but I wasn’t able to find a storage provider in the EU with S3 compatibility and the features I needed (append-only access & lifecycle deletion). But that’s a story for a different time.
Duplicity accesses my backups through Google’s S3-compatible interoperability API. This means I only need minimal configuration changes when moving from Amazon to Google (or away to yet another provider in the future).
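For reference, my old duply TARGET pointed at that interoperability endpoint using duplicity’s boto-style URL, roughly like this (bucket name and path are placeholders):
TARGET=s3://storage.googleapis.com/your-bucket-name/rest/of/the/path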
However, after my recent system upgrade to Debian 13 (trixie), duply and duplicity stopped working, throwing cryptic error messages about a HeadBucket operation being Forbidden.
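The failing call is boto3’s HeadBucket, and the error is the standard botocore 403 - something like:
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden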
What’s going on?
As part of the upgrade to Debian 13, duplicity gets upgraded to version 3.0.4, which uses the boto3 library instead of the boto library. On the duplicity man page, the section ‘A NOTE ON AMAZON S3’ informs us of an important change:
The boto3 backend only supports newer domain style buckets. Amazon is moving to deprecate the older bucket style, so migration is recommended.
You know who is not moving to deprecate the older bucket style? Google.
When you run duplicity with the old-style URL, boto3 now treats the hostname as a domain-style bucket name. But the hostname in the URL of our bucket is storage.googleapis.com. Duplicity tries to read from that domain as if it were a bucket, and the HeadBucket operation comes back Forbidden - because we’re not allowed to read from the top level of Google’s storage API. Fair enough.
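To make the two addressing styles concrete (bucket name hypothetical):
# Older path-style: bucket name in the path
https://storage.googleapis.com/your-bucket-name/some-object
# Newer domain-style (virtual-hosted): bucket name in the hostname
https://your-bucket-name.storage.googleapis.com/some-object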
What duplicity fails to mention is that this means you can no longer use Google Cloud Storage as a backend through S3 access. It might be possible to turn your GCS bucket into a domain-style bucket using a custom domain, but I have not tried this, because it seemed like more trouble than it’s worth.
Using rclone as the duplicity backend
Luckily, duplicity also supports rclone as a backend, and rclone does support S3 access to Google Cloud Storage. Simple enough, right?
To save you some time:
- The rclone binary you get through apt is version 1.60, which does not yet support GCS access through S3.
- The rclone binary you get from the snap store is the right version (mine was 1.73), but due to snap confinement it cannot access the /tmp folder, where duplicity stores its temporary files by default. This yields another cryptic error message from duplicity about not being able to read a newly downloaded file from /tmp.
- The solution I found is to manually download and install a recent version of rclone from the project website; a sketch follows below.
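Something like this, using the install script documented on rclone.org (inspect the script before piping it to a shell):
# Check which rclone is currently on the PATH, and its version
which rclone
rclone version
# Official install script from the rclone project website
curl https://rclone.org/install.sh | sudo bash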
After you’ve installed rclone, configure it, either manually in rclone.conf or through rclone config. I ended up with this configuration:
[s3-gcs-backup]
type = s3
provider = GCS
env_auth = true
endpoint = https://storage.googleapis.com
I prefer to keep my GCS authentication credentials in my duply config, because I’m already used to securing that file. With env_auth = true, rclone reads those credentials from the environment variables that duply sets.
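Concretely, the duply profile’s conf file (which duply sources as a shell script) can export the standard AWS variable names that rclone’s S3 backend reads when env_auth = true - something like:
# in ~/.duply/<yourbackup>/conf (or /etc/duply/<yourbackup>/conf)
export AWS_ACCESS_KEY_ID='<your GCS HMAC access key>'
export AWS_SECRET_ACCESS_KEY='<your GCS HMAC secret>'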
Now, you need to update your duply config, specifically the TARGET variable:
TARGET=rclone://s3-gcs-backup:/your-bucket-name/rest/of/the/path
You can test your new setup by running duply <yourbackup> status and duply <yourbackup> list.
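If those commands fail, it helps to test the rclone remote on its own first. Because of env_auth = true you have to export the credentials yourself in that shell (bucket name hypothetical):
export AWS_ACCESS_KEY_ID='<your GCS HMAC access key>'
export AWS_SECRET_ACCESS_KEY='<your GCS HMAC secret>'
# List the buckets the key can see, then the backup files themselves
rclone lsd s3-gcs-backup:
rclone ls s3-gcs-backup:your-bucket-name/rest/of/the/path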