While Bitwarden offers a robust, enterprise-ready self-hosted option, it is resource-intensive. The official Bitwarden server relies on a complex stack of MSSQL (Microsoft SQL Server) and multiple .NET containers, often requiring 2GB+ of RAM just to idle.
Vaultwarden is a lightweight rewrite of the Bitwarden API in Rust. It is fully compatible with official Bitwarden apps (browser extensions, mobile, desktop) but runs on a fraction of the resources (often <100MB RAM).
Bitwarden offers a free tier for individuals. If you have a family or an organization and want to share collections of passwords, you need to move to a paid tier. The advantages of the hosted service are convenience (no maintenance on your part) and no missing features.
The obvious pitfall with Vaultwarden is that if you deploy it yourself, you also have to handle the maintenance. We counter that by using Docker containers, which can be updated very easily. What takes more effort is keeping your web and database clusters up to date and secure as well, which is beyond the scope of this tutorial. In addition, if Bitwarden introduces a flashy new feature, you may have to wait until the Vaultwarden developers implement it.
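For example, if you deploy Vaultwarden with Docker Compose, pulling a new image and recreating the container takes under a minute. This is only a sketch: the directory and service name below are illustrative and depend on your own compose file.

```bash
# Update the Vaultwarden container in place.
# /opt/vaultwarden and the service name "vaultwarden" are illustrative; adjust to your setup.
cd /opt/vaultwarden
docker compose pull vaultwarden   # fetch the newest image referenced in docker-compose.yml
docker compose up -d vaultwarden  # recreate only the updated container
docker image prune -f             # optionally remove superseded image layers
```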
Prerequisites:
- A web server (or two) on Sites 1 + 2 running nginx on Debian 12 or newer. Their setup is beyond the scope of this article – you can refer to my previous article on How to configure High Availability for a Web Server using Syncthing and HAProxy (on OPNSense).
- A Galera cluster (MariaDB) deployed in HA across your two sites, ideally with a Galera witness on Site 3 (all connected via WireGuard). Again, the setup is beyond the scope of this article – you can complete it using the previous tutorials titled Deploy MariaDB Galera Cluster on Proxmox and Set up a Galera Witness on Hetzner VPS using Terraform + Ansible (AWX). A quick health check for this cluster is sketched right after this list.
- OPNSense (or pfSense or similar) with an HAProxy module to act as a reverse proxy + virtual IP interface for our DB cluster on Sites 1 and 2. Previously, this was completed in a tutorial called OPNSense in HA with CARP with dual WANs.
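Before going further, it is worth confirming that the database tier answers through the HAProxy virtual IP. Here is a minimal check, assuming the DB virtual IP 192.168.8.70 used in the topology below and a placeholder MariaDB user that is allowed to connect from your web servers:

```bash
# Confirm the Galera cluster is reachable on the HAProxy VIP and has full quorum.
# 192.168.8.70 is the DB virtual IP from this setup; the user/password are placeholders.
mysql -h 192.168.8.70 -u vaultwarden -p \
  -e "SHOW STATUS LIKE 'wsrep_cluster_size'; SHOW STATUS LIKE 'wsrep_local_state_comment';"
# Expect wsrep_cluster_size to match your node count and the state to read 'Synced'.
```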
Topology:
- Site 1 (Main) – all connected to a UPS managed from an RPI
  - OPNSense in HA (192.168.8.254) – CARP 0 – see this guide to set it up.
    - OPNSense1 (192.168.8.1) – Proxmox host 1
    - OPNSense2 (192.168.8.2) – Proxmox host 2
    - Site-to-site VPN (WireGuard) with the interface of 10.10.10.1/24
  - Web servers in HA – reachable via a shared back-end pool in HAProxy within OPNSense
    - web1 (192.168.8.9) – Proxmox host 1
      - Syncthing from web1 to web2 & web1 to web3 – to sync user data in near real-time
      - Connected to a local Gitea server – to update app data on demand
    - web2 (192.168.8.10) – Proxmox host 2
      - Syncthing from web2 to web1 & web2 to web3 – to sync user data in near real-time
      - Connected to a local Gitea server – to update app data on demand
  - Database Cluster – MariaDB Galera cluster – reachable on virtual IP 192.168.8.70 configured in OPNSense
    - galera1-2 (192.168.8.71 + .72) – Proxmox host 1
    - galera3-4 (192.168.8.73 + .74) – Proxmox host 2
    - galera-template – a pre-prepared LXC template ready for easy deployment in case a replacement is needed (deployment can be automated with AWX)
  - Gitea LXC (192.168.8.21) – to sync app data and deployment scripts + config.xml for each OPNSense host
  - Uptimekuma1 (192.168.8.60) – to monitor all Site 1 hosts’ uptime incl. services like Syncthing
  - RPI (192.168.8.16)
    - Corosync for HA for the Proxmox cluster – ensures a host is reachable if one is down
    - Proxmox Backup Server with an added drive – for Sites 1 + 2
    - APCupsd with scripts for smooth, graceful shutdown of Proxmox hosts – see this tutorial for more information
    - Can act as a local Galera witness before Site 3 is set up
- Site 2 (Online Backup)
  - OPNSense3 (192.168.6.1) – Proxmox host 3
    - Site-to-site VPN (WireGuard) with the interface of 10.10.10.2/24
  - web3 (192.168.6.10) – Proxmox host 3
    - Syncthing from web3 to web1 & web3 to web2 – to sync user data in near real-time
    - Connected to Site 1’s Gitea server – to update app data on demand
  - Galera4+5 (192.168.6.75 + .76) – Proxmox host 3
  - Uptimekuma2 (192.168.6.60) – monitors Site 2 services incl. web2’s Syncthing jobs
- Site 3 (Witness)
  - Galera witness VPS deployed automatically via AWX in Hetzner – see this tutorial on how to achieve that.
    - Site-to-site VPN (WireGuard) with the interface of 10.10.10.3/24 (a quick connectivity check for the mesh follows this list)
    - garbd – Galera Arbitrator daemon; holds no data and provides HA if one site is down (counts as +1 in quorum voting)
    - Uptimekuma3 – monitors uptime for websites served from either site (such as bachelor-tech.com)
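With the topology in place, a quick sanity check is to make sure the three WireGuard endpoints can reach each other. A minimal sketch, assuming the 10.10.10.1/.2/.3 addresses listed above and that you run it from a host that can route into the tunnel network:

```bash
# Ping each site's WireGuard interface to confirm the site-to-site mesh is up.
# 10.10.10.1 = Site 1, 10.10.10.2 = Site 2, 10.10.10.3 = Site 3 (witness).
for peer in 10.10.10.1 10.10.10.2 10.10.10.3; do
  ping -c 3 -W 2 "$peer" >/dev/null && echo "$peer reachable" || echo "$peer UNREACHABLE"
done
```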
Software Versions at the time of writing
For reproducibility, these are the versions I worked with:
- OPNSense:
- Core Version: 25.7.10 (commit: c2f076f30)
- os-haproxy plugin: 4.6_1
- os-acme plugin: 4.11
- os-ddclient plugin: 1.28
- os-git-backup: 1.1_1
- Vaultwarden Docker Image: 1.35.2
- Web servers
- OS: Debian 12 (Bookworm, kernel 6.1.0-26-amd64)
- nginx: 1.28.0
- Docker Engine (client + server): 29.1.3
- Syncthing: 2.0.12 (Hafnium Hornet)
- Fail2ban (server + client): 1.0.2
- Galera cluster:
- OS: Debian 13 (Trixie, kernel 6.17.2-2-pve)
- MariaDB Server: 11.8.3
- MariaDB Client: 15.2
- UptimeKuma: 2.0.0-beta.4
Gotchas with Vaultwarden & Syncthing
There are a few specific limitations to be aware of in an Active-Active setup for Vaultwarden:
Primary Keys: Vaultwarden’s tables use primary keys, which is good (Galera requires them). However, make sure your Galera nodes are not enforcing strict primary-key checks (for example via a wsrep_mode flag such as REQUIRED_PRIMARY_KEY, or the wsrep_drupal_282555_workaround setting) that might block INSERT statements on rows that momentarily lack a PK. Most likely, this will not be an issue.
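If you want to verify this on a node, you can inspect the relevant wsrep variables. This is only a quick check; which of these variables exists depends on your MariaDB version, so treat the list as a starting point rather than a definitive set:

```bash
# Run on any Galera node: show the wsrep settings related to strict primary-key handling.
mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
          ('wsrep_mode', 'wsrep_drupal_282555_workaround', 'wsrep_certify_nonPK');"
# wsrep_mode should not include REQUIRED_PRIMARY_KEY if any table could momentarily lack a PK.
```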
Token Signing Keys: Users will get logged out if they hit web1 but their token was signed by web2 or web3 using a different key. You must ensure the RSA key files in the data directory are identical across all nodes.
We will handle that in this guide by using Syncthing to distribute the RSA key between the web nodes.
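Once Syncthing is distributing the data directory, you can confirm the signing key really is identical everywhere by comparing checksums. A minimal sketch, assuming the Vaultwarden data directory sits at /opt/vaultwarden/data on each web server (the path and hostnames are illustrative) and that you have SSH access between the nodes:

```bash
# Compare the RSA signing key checksum across the three web nodes.
# /opt/vaultwarden/data is an assumed path; adjust to where your data directory lives.
for host in web1 web2 web3; do
  printf '%s: ' "$host"
  ssh "$host" "sha256sum /opt/vaultwarden/data/rsa_key.pem | cut -d' ' -f1"
done
# All three hashes must match, otherwise tokens signed on one node are rejected on another.
```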
WebSockets: Vaultwarden does not currently have a “message bus” (such as Redis) to broadcast WebSocket events between nodes. If a user updates a password on web1, a device connected to web2 or web3 won’t get the live update immediately; it will sync the next time the user manually refreshes or performs an action. This is a minor inconvenience, but acceptable for most users.