To auto-start the rclone mount on each reboot, we need to create a custom daemon (systemd service) that will run the mount command for us.
- Install the necessary fuse3 drivers by running ‘apt-get install fuse3 -y’.
- Create a mounting directory, e.g. ‘mkdir /mnt/gcp’.
- Create a daemon file by running ‘nano /etc/systemd/system/rclone-gcp.service’.
- Consider how much cache space could be used (versus your available space on /root). The more cache the better, but you do not want to prevent Proxmox from working.
- If you have another mountpoint (such as another drive) to use as VFS cache, it is better to use that, as it prevents the /root space from being over-utilized. The directive for that is ‘--cache-dir <mountpoint>’, as shown in the sketch below.
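For example, assuming a hypothetical second drive mounted at /mnt/cache (adjust to your own setup), the cache directive would simply be one extra continuation line in the ExecStart command of the service file shown below:

--cache-dir /mnt/cache \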
[Unit]
Description=Rclone GCP mount
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
ExecStart=/bin/rclone mount proxmox-gcp:proxmox-backup-bachelor /mnt/gcp \
  --config=/root/.config/rclone/rclone.conf \
  --allow-non-empty \
  --gcs-bucket-policy-only \
  --dir-cache-time 1000h \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 50G \
  --vfs-cache-max-age 24h \
  --log-file /var/log/rclone-gcp.log \
  --log-level INFO
ExecStop=/bin/fusermount -uz /mnt/gcp
Restart=on-failure
RestartSec=10
User=root
Group=root

[Install]
WantedBy=default.target
- To explain what each parameter means:
- ‘--allow-non-empty’: Allows mounting over an existing folder that already contains data. While this is usually not recommended, it prevents the situation where, if the mount fails, your backups would be written to the local drive and fill it up.
- ‘--gcs-bucket-policy-only’: Utilizes the bucket policy rather than per-object permissions – this is the most modern approach in GCP and the recommended one.
- ‘--dir-cache-time 1000h’: Sets how long directory entries are cached, in this case 1000 hours. This helps improve performance by reducing the number of API calls.
- ‘--vfs-cache-mode writes’: Enables caching of files for writing. Imagine you take a snapshot of a VM that runs critical networking infrastructure services like pfSense, OPNsense or Pi-hole. Unless you have it set up in high availability (HA), the VM/container will be unavailable for the duration of a snapshot that is copied straight to the cloud. It is better to take the snapshot locally into a cache folder and then copy it over to the cloud storage.
- ‘--vfs-cache-max-age 24h’: Sets the maximum age of objects in the cache to 24 hours.
- ‘--log-level INFO’: Sets the level from which down the log records messages. The levels, from most to least verbose, are DEBUG, INFO, NOTICE and ERROR. Keep in mind that after some time the log may get quite large, so once everything is working as expected, it is best to change it to ERROR (see the example after this list).
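As a rough sketch of that tidy-up step, using the log path and unit name defined above: follow the log while you are testing, and once the mount behaves as expected, lower the log level in the service file and reload systemd.

# Follow the rclone log while testing the mount
tail -f /var/log/rclone-gcp.log

# Once everything works, edit the unit and change --log-level INFO to --log-level ERROR
nano /etc/systemd/system/rclone-gcp.service
systemctl daemon-reload
systemctl restart rclone-gcp.service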
- To make sure that the log file does not grow too large, let’s put some log rotation in place using the system’s native logrotate utility:
- Create a logrotate file: ‘nano /etc/logrotate.d/rclone-gcp’
- Add the following content to it:

/var/log/rclone-gcp.log {
    monthly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
    create 644 root root
}
- Then run ‘logrotate --debug /etc/logrotate.conf’. You should find a record that says something like this:
rotating pattern: /var/log/rclone-gcp.log  monthly (12 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/rclone-gcp.log
  Creating new state
  Now: 2024-08-14 09:52
  Last rotated at 2024-08-14 09:00
  log does not need rotating (log has already been rotated)
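If you want to see a rotation actually happen rather than just the dry run, you can optionally force one against this specific configuration file and check the result:

logrotate --force /etc/logrotate.d/rclone-gcp
ls -lh /var/log/rclone-gcp.log*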
- To finish setting up the rclone service and have it auto-start at boot:
systemctl enable rclone-gcp.service
systemctl start rclone-gcp.service
systemctl status rclone-gcp.service
- If you see a failure, go back to the service file and make sure you have not left any stray blank spaces. You can also test the ExecStart command directly in your terminal to confirm that it works and to see what error you get, as shown in the sketch below. Once you modify the service file, make sure to run ‘systemctl daemon-reload’ so systemd picks up the change, and then start the service again.
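A minimal sketch of that manual test, using the same remote, bucket and mountpoint as the unit file above (stop the service first so the two do not compete for the mountpoint):

# Stop the service so it does not hold the mountpoint
systemctl stop rclone-gcp.service

# Run the mount command in the foreground with debug output; errors print straight to the terminal
/bin/rclone mount proxmox-gcp:proxmox-backup-bachelor /mnt/gcp \
  --config=/root/.config/rclone/rclone.conf \
  --allow-non-empty \
  --gcs-bucket-policy-only \
  --vfs-cache-mode writes \
  -vv

# In a second terminal, confirm the bucket content is visible, then unmount
ls /mnt/gcp
fusermount -uz /mnt/gcp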