
10. Set up Monitoring – UpTime Kuma

by Jan Bachelor October 24, 2025

In one of the first steps of this tutorial, we allowed traffic for a network monitoring system, yet up to this point we have not set one up. It is important to know when one of your cluster nodes (or your Arbitrator) is down. In this case, I have included a few steps for Uptime Kuma. If you prefer Zabbix or another monitoring system, let me know in the comments and I may include it as well.

  • Connect to one of your nodes in the cluster and add a user with sufficient privileges (here we assume that your Uptime Kuma instance runs on the 192.168.8.0/24 subnet – change it to your own!):
mysql -u root -p

CREATE USER 'uptimekuma'@'192.168.8.0/24' IDENTIFIED BY 'your-secure-password';
GRANT PROCESS ON *.* TO 'uptimekuma'@'192.168.8.0/24';
FLUSH PRIVILEGES;
EXIT;
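
A quick sanity check you can run on the node itself is the same status query the monitoring script below will use – a healthy Galera member reports Synced, which is exactly the string the script greps for:

# Run on the node - a healthy member reports "Synced"
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"

# Expected output (formatting may vary):
# +---------------------------+--------+
# | Variable_name             | Value  |
# +---------------------------+--------+
# | wsrep_local_state_comment | Synced |
# +---------------------------+--------+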

Why are we scripting it from Uptime Kuma? Isn’t there an easier way?

  • Technically, we could just create a TCP monitor for each node and check that the MariaDB port is reachable. Yet if a node got out of sync, such a check would not reveal it.
  • You might also ask why not simply use the built-in ‘MySQL / MariaDB’ monitor. The issue is that, depending on your MariaDB version, the driver layer in Uptime Kuma may have compatibility issues – in my testing, I came across several combinations where it did not work. In addition, it only works if you are using TLS, as at the time of writing there is no way to tick a box to trust self-signed certs.
  • So the script below covers two things: first, it runs a TCP check just like the built-in TCP monitor would. Only if that passes does it use the ‘mariadb-client’ package to connect to the node and check that it reports as Synced. This way, you have just one monitor per node (and thus one notification instead of several), and that notification contains the detail on what is not working.
  • Connect to your Uptime Kuma host via SSH and run the following commands (netcat is needed for the script’s TCP check):
sudo apt update
sudo apt install -y mariadb-client netcat-openbsd

sudo nano /usr/local/bin/check_galera.sh

#!/bin/bash

# --- ARGUMENTS ---
# The Galera node IP to check is passed as the first argument
DB_HOST=$1
# The Uptime Kuma push code is passed as the second argument
KUMA_CODE=$2

# --- CONFIGURATION ---
# The base URL for your Uptime Kuma instance - change the IP to your own
UPTIME_KUMA_BASE_URL="http://1.2.3.4:3001"
DB_USER="uptimekuma"
DB_PASS="your-secure-password"
# ---------------------

if [ -z "$DB_HOST" ] || [ -z "$KUMA_CODE" ]; then
    echo "Usage: $0 <database_host_ip> <uptime_kuma_push_code>"
    exit 1
fi

# Construct the full push URL dynamically
UPTIME_KUMA_URL="${UPTIME_KUMA_BASE_URL}/api/push/${KUMA_CODE}"

# --- STEP 1: Check if the TCP port is open ---
if ! nc -z -w 3 "$DB_HOST" 3306 > /dev/null 2>&1; then
    # If nc fails, the port is down. Report and exit.
    curl -fsS --retry 3 "${UPTIME_KUMA_URL}?status=down&msg=Port_3306_Down&ping=" > /dev/null
    exit 0 # Exit cleanly since we successfully reported the status
fi

# --- STEP 2: If the port is open, proceed to check the Galera status ---
export MYSQL_PWD="$DB_PASS"
if mysql --skip-ssl -h "$DB_HOST" -u "$DB_USER" -e "SHOW STATUS LIKE 'wsrep_local_state_comment';" | grep -q "Synced"; then
    # If grep finds "Synced", the node is healthy. Push an UP status.
    curl -fsS --retry 3 "${UPTIME_KUMA_URL}?status=up&msg=Synced&ping=" > /dev/null
else
    # If grep does not find "Synced", the node is not healthy. Push a DOWN status.
    curl -fsS --retry 3 "${UPTIME_KUMA_URL}?status=down&msg=Not_Synced&ping=" > /dev/null
fi
unset MYSQL_PWD

# Save and exit, then make the script executable
sudo chmod +x /usr/local/bin/check_galera.sh
  • Then create a separate “Push” monitor in Uptime Kuma for each node to get a unique URL for each cron line.
Create a (passive) push monitor in Uptime Kuma
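Before adding any cron entries, it is worth running the script once by hand with one node’s IP and the push code from the URL you just generated (the values below are placeholders – substitute your own). The corresponding Push monitor in Uptime Kuma should turn green within a few seconds:

# One-off manual test - replace the IP and the push code with your own values
/usr/local/bin/check_galera.sh 192.168.8.71 kuma_push_code1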
  • And lastly, edit your crontab and re-use the same script to run for each node, passing its IP address and the unique push code.
# Edit crontab
sudo crontab -e

# Check Galera nodes every minute (replace the IP addresses with your own
# or use hostnames, and add the respective Uptime Kuma passive push codes
# from the generated URLs).
* * * * * /usr/local/bin/check_galera.sh 192.168.8.71 kuma_push_code1
* * * * * /usr/local/bin/check_galera.sh 192.168.8.72 kuma_push_code2
* * * * * /usr/local/bin/check_galera.sh 192.168.8.73 kuma_push_code3
* * * * * /usr/local/bin/check_galera.sh 192.168.8.74 kuma_push_code4
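If a monitor stays grey or keeps flapping, a quick way to confirm that cron is actually executing the script is to check the cron log. The sketch below assumes a Debian/Ubuntu host with systemd, where the cron service unit is called cron:

# Show recent cron runs of the check script (Debian/Ubuntu with systemd assumed)
sudo journalctl -u cron --since "15 minutes ago" | grep check_galera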
  • The result in Uptime Kuma is as follows:
Once all MariaDB Galera cluster monitors have been added to Uptime Kuma

Test that the alerts work

  • Connect to one of your nodes and simulate a failure, for example by running sudo systemctl stop mariadb.
  • When I stopped the mariadb service on node2, this is the notification specifying what went wrong:
Notification from Uptime Kuma to a Discord server (notifications were pre-configured separately from this guide) when the MariaDB service on node2 is down
  • When I switched it back on (by running systemctl start mariadb), I got another notification:
Recovery notification from Uptime Kuma to Discord once the mariadb service was started again on node2
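Besides the recovery notification, you can double-check on any node that the restarted member has actually rejoined the cluster. In this course’s layout (four database nodes plus the Arbitrator), wsrep_cluster_size should be back at 5:

# Run on any node - the Arbitrator counts as a cluster member too
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"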
  • So now that we have a working cluster, ideally with an Arbitrator that is separate from the other physical servers, and monitoring in place, it is time to entrust our cluster with some data!

9. Understanding Quorum - When the Raspberry Pi comes in
11. Entrust your Galera Cluster with data
Go back to course overview: Deploy MariaDB Galera Cluster on Proxmox
