Some might argue that since Syncthing is not a filesystem and its sync is asynchronous, the time required for the other web nodes to receive the data may not be acceptable. This can be true under high load, or when you cannot use the sticky option. What are some other options?
Lsyncd (Live Syncing Daemon)
Originally, I intended to write this guide using lsyncd. It is a lightweight tool that uses the Linux kernel’s inotify subsystem to watch for file changes in real time and immediately trigger rsync to mirror them.
- Pros: Extremely efficient for one-way mirroring (Master -> Slaves). It is ‘closer to real-time’ than a scheduled cron job or `rsync` alone.
- Cons: The project is currently in maintenance mode (fewer updates). It is strictly command-line based and can be harder to troubleshoot if the sync gets stuck. It also lacks the robust multi-master conflict resolution that Syncthing handles natively.
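For reference, a minimal one-way lsyncd mirror looks something like the sketch below. The paths, the `web2.example.com` hostname and the config location are assumptions for illustration; adjust them to your own setup. Note that lsyncd pushes changes via rsync over SSH, so key-based authentication between the nodes must already be in place.

```bash
# Minimal sketch: mirror wp-content from the master to one slave node.
# Paths and hostname are placeholders, not values from this tutorial.
sudo tee /etc/lsyncd/lsyncd.conf.lua > /dev/null <<'EOF'
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.status",
}

sync {
    default.rsync,
    source = "/var/www/example.com/wp-content",
    target = "web2.example.com:/var/www/example.com/wp-content",
    delay  = 1,  -- batch events for up to 1 second before syncing
    rsync  = {
        archive  = true,
        compress = true,
    },
}
EOF
sudo systemctl restart lsyncd
```

With multiple slaves you would simply add one `sync { ... }` block per target, which is also where the one-way limitation becomes apparent: nothing flows back to the master.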
Distributed Filesystems (GlusterFS / Ceph)
Critics might argue that because Syncthing is asynchronous (meaning there is a slight delay before file XYZ appears on Server B), it isn’t ‘true’ High Availability. For mission-critical high-traffic sites, using a distributed filesystem like GlusterFS or Ceph is the industry standard.
- Pros: True synchronous writes. When a file is uploaded, it effectively exists on all nodes instantly. You mount a shared drive (`/mnt/gluster`) that all web servers read from simultaneously.
- Cons: Significantly higher complexity and maintenance overhead. These systems are sensitive to network latency. While they can technically run over a site-to-site VPN, the performance penalty is usually severe compared to Syncthing, which handles slower WAN links gracefully.
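To give a sense of the setup, here is a rough sketch of creating a two-node replicated GlusterFS volume. The hostnames (`web1`, `web2`), brick paths and volume name are assumptions; in production you would typically use `replica 3` or an arbiter node to reduce the risk of split-brain.

```bash
# On web1: form the trusted pool and create a replicated volume.
# Hostnames, paths and the volume name are illustrative placeholders.
sudo gluster peer probe web2
sudo gluster volume create webdata replica 2 \
    web1:/data/glusterfs/webdata/brick1 \
    web2:/data/glusterfs/webdata/brick1
sudo gluster volume start webdata

# On every web server: mount the volume where the vhost expects its files
sudo mkdir -p /mnt/gluster
sudo mount -t glusterfs localhost:/webdata /mnt/gluster
```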
DRBD (Distributed Replicated Block Device)
DRBD operates at the block level, sitting underneath the filesystem. You can think of it as ‘Network RAID 1’, mirroring raw data between two drives over the network.
- Pros: Extremely fast and reliable data replication; open source.
- Cons: By default, DRBD is often deployed in Active-Passive mode (only one server can mount the drive at a time). To use it Active-Active (where both web servers write to it), you must use a cluster-aware filesystem like OCFS2 or GFS2. This adds a massive layer of complexity (fencing, quorum, STONITH) that usually requires a dedicated storage engineer to maintain properly.
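As an illustration, a minimal DRBD resource definition looks roughly like the following. The backing disk (`/dev/sdb1`), hostnames and IP addresses are assumptions, and the sketch shows the default Active-Passive mode, where only one node is promoted to Primary at a time.

```bash
# Same resource file on both nodes; the 'on' names must match each host's
# `uname -n`. Disk, hostnames and IPs are placeholders.
sudo tee /etc/drbd.d/webdata.res > /dev/null <<'EOF'
resource webdata {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on web1 { address 10.0.0.1:7789; }
    on web2 { address 10.0.0.2:7789; }
}
EOF

# On both nodes: initialise the metadata and bring the resource up
sudo drbdadm create-md webdata
sudo drbdadm up webdata

# On web1 only: force the initial promotion to Primary, after which the
# /dev/drbd0 device can be formatted and mounted there
sudo drbdadm primary --force webdata
```

Everything beyond this point (promoting the other node on failover, or layering OCFS2/GFS2 on top for Active-Active) is exactly the cluster machinery mentioned above.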
Conclusion
This tutorial showed you how to clone your existing single web server and configure it for HA with Syncthing. There are certainly other options, yet whichever one you choose needs to be installed, configured and monitored. With Syncthing, you get a handy GUI that gives you a quick overview of each sync job from the point of view of each node.
What we have not covered in this tutorial is how to deploy code (including updates) to each web server node. For example, if you have a WordPress site and apply plugin, theme and WP core updates, how will they propagate to the other web servers, since Syncthing covers only the wp-content folder? The answer is not to add the entire virtual host to Syncthing, as that could lead to unexpected behavior and data loss. Rather, we can utilize Gitea for this purpose and pull content to each web server automatically once it is pushed from our testing environment, along the lines of the sketch below. Let me know in the comments below if you are interested in such a guide 😇
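As a teaser, the pull side of such a workflow could be as simple as the script below, run on each web node by cron or triggered by a Gitea webhook. The vhost path and branch name are hypothetical placeholders, and the script assumes the vhost directory is already a git checkout of the Gitea repository.

```bash
#!/usr/bin/env bash
# deploy.sh — hypothetical pull-based deploy for one web node
set -euo pipefail

SITE_DIR=/var/www/example.com   # assumption: the vhost is a git checkout
BRANCH=main                     # assumption: the branch Gitea receives pushes on

cd "$SITE_DIR"
git fetch origin
# Deploy exactly what was pushed, discarding any local drift on this node
git reset --hard "origin/${BRANCH}"
```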