Installing a MariaDB Galera Cluster on Ubuntu 24.04 | MariaDB Galera pt 1

by Mykhaylo Rykmas | Dec 4, 2025 | MariaDB

At Vettabase, we’re starting a new blog series on High Availability (HA) with focus on MariaDB Galera Cluster. This series will be a collection of hands-on guides, each tackling one practical topic: from installation, configuration, and adding or removing nodes, to backups, upgrades, and schema changes.

Our goal is simple: create a complete, practical reference that anyone can follow to deploy and maintain a resilient MariaDB Galera cluster.

Each article will be concise, command-driven, and easy to reproduce on your own servers. This first post covers the foundation: installing a 3-node MariaDB Galera Cluster on Ubuntu 24.04 LTS.

Environment Setup

For this setup, we used three AWS EC2 instances (each t3.micro, Free Tier) running Ubuntu 24.04 LTS. Each host is configured for SSH key–based access and passwordless sudo privileges:

ssh -i <my-ssh-key> ubuntu@<Public IP address>

List of nodes:

  • Galera1: 172.31.2.197
  • Galera2: 172.31.3.237
  • Galera3: 172.31.0.181
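For convenience, you could also map these names in /etc/hosts on each instance. This step is purely optional (the rest of this post uses the raw private IPs):

```
172.31.2.197 galera1
172.31.3.237 galera2
172.31.0.181 galera3
```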

Galera Architecture Overview

MariaDB Galera Cluster is a multi-primary, virtually synchronous replication system. That means all nodes (called Galera nodes) can accept both reads and writes, and every transaction is replicated to all the others in real time.

Key concepts to understand before setup:

  • Cluster: a group of nodes communicating via the gcomm:// protocol.
  • Primary Component: the active group of nodes that can process writes.
  • SST (State Snapshot Transfer): a full data copy from one node to another when a new node joins the cluster.
  • IST (Incremental State Transfer): a sync of only recent changes.
  • Bootstrap: the initial action of starting the first node in a Galera cluster. It creates the primary component and defines the cluster’s initial state. Only one node should ever be bootstrapped. All other nodes must join it.

Galera’s Quorum

Galera Cluster relies on a quorum-based decision system to maintain data consistency and prevent split-brain situations. Quorum means that more than half of the nodes must be online for the cluster to remain operational (Primary Component). If the quorum is lost – for example, if two of three nodes suddenly crash – the remaining node automatically switches to a non-primary state and stops accepting writes to prevent data divergence.
That’s why a 3-node setup is recommended: it guarantees that even if one node fails, the remaining two can still reach quorum and continue processing writes safely. If 3 nodes are not sufficient (which is uncommon), 5 nodes are recommended, so the quorum will consist of 3 nodes.
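The quorum size follows directly from the majority rule: a cluster of n nodes needs at least floor(n/2) + 1 live members to keep a Primary Component. A quick sketch of the arithmetic:

```shell
# Majority quorum: more than half of the nodes must be online.
# floor(n/2) + 1 live nodes are required for a Primary Component.
for n in 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "cluster of ${n} nodes: quorum = ${quorum} (can lose $(( n - quorum )))"
done
```

With three nodes you can lose one; with five you can lose two. A lone survivor of a 3-node cluster (1 < 2) drops to non-primary and stops accepting writes.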

Installation

The installation process is well documented in the official MariaDB documentation, and we’ll follow those steps here.

Run the following commands on each node:

curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup
Before executing it, verify the checksum to make sure you downloaded the original MariaDB script. Check the current checksum on the MariaDB page above, then run:
checksum=923eea378be2c129adb4d191f01162c1fe5473f1114d7586f096b5f6b9874efe
echo "${checksum} mariadb_repo_setup" | sha256sum -c -
Expected output:
mariadb_repo_setup: OK
Make the script executable:
chmod +x mariadb_repo_setup

Adding the MariaDB 11.8 Repository

At the time of this writing, the latest LTS (Long-Term Support) release of MariaDB is 11.8.
Run the following command to add the repository:
sudo ./mariadb_repo_setup --mariadb-server-version="mariadb-11.8"
Expected output:
[info] Checking for script prerequisites.
[info] MariaDB Server version 11.8 is valid
[info] Repository file successfully written to /etc/apt/sources.list.d/mariadb.list
[info] Adding trusted package signing keys...
[info] Running apt-get update...
[info] Done adding trusted package signing keys

Install the Required Packages

Finally, install MariaDB and Galera components:
sudo apt install mariadb-server mariadb-client mariadb-backup galera-4 -y
This installs:
  • mariadb-server: main database engine
  • mariadb-client: client tools (mysql, mariadb, etc.)
  • mariadb-backup: backup utility
  • galera-4: synchronous replication provider for the Galera cluster

Secure the MariaDB Installation

After installing MariaDB and Galera components, it’s recommended to run the built-in hardening script. This will remove anonymous users, disable remote root login, and secure your installation.
sudo mariadb-secure-installation
You’ll be prompted to:
  • Switch to unix_socket authentication – (recommended: Yes)
  • Remove anonymous users – (Yes)
  • Disallow root login remotely – (Yes)
  • Remove test database – (Yes)
  • Reload privilege tables – (Yes)
Once completed, your MariaDB instance will be more secure and ready for Galera configuration.

Configure the First (Bootstrap) Node

We’ll start by configuring the first node. This is the one that will bootstrap the cluster and initialize the primary component. After that, the remaining nodes will simply join it and synchronize automatically.
For reference, our cluster nodes are:
  • Galera1: 172.31.2.197 (bootstrap node)
  • Galera2: 172.31.3.237
  • Galera3: 172.31.0.181

Minimal Configuration

Instead of scattering options across the default configuration files, it’s good practice to keep the configuration in one place, with core MariaDB settings and Galera-specific options in clearly separated, commented sections.
This keeps things clean and easier to manage.
  • /etc/mysql/my.cnf
[mariadbd]
# Basic settings
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
log_error=/var/log/mysql/mariadb.err
socket=/run/mysqld/mysqld.sock

# Innodb
innodb_force_primary_key=1

# Galera settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="<your cluster name>"
wsrep_cluster_address="gcomm://<comma separated list of Galera Cluster IPs>"

# Node identity (change per node)
wsrep_node_name=<node name>
wsrep_node_address="<node IP>"

# SST configuration
wsrep_sst_method=mariabackup
wsrep_sst_auth="<sst user>:<sst password>"
This layout separates the core database configuration from the cluster logic, making it easier to upgrade MariaDB, manage changes, or temporarily disable Galera (for example, during maintenance).
Galera Parameters Explanation:
  • wsrep_on – enables Galera replication.
  • wsrep_provider – path to the Galera library (libgalera_smm.so), required for replication to function.
  • wsrep_cluster_name – logical name of the cluster; all nodes must use the same name.
  • wsrep_cluster_address – list of all cluster node IPs separated by commas in the format `gcomm://IP1,IP2,IP3`. During bootstrap, this list tells Galera which nodes to contact.
  • wsrep_node_name – a unique name for the node within the cluster.
  • wsrep_node_address – the IP address used for replication traffic.
  • wsrep_sst_method – defines the method for State Snapshot Transfer (SST) – the process of copying full data from one node to another.
    • rsync – simple and easy to configure.
    • mariabackup – preferred for large datasets (non-blocking, hot backup).
  • wsrep_sst_auth – credentials used by the donor node during SST.
    • Format: “username:password”.
    • This user must have privileges: RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT.

Add innodb_force_primary_key=1 to ensure all InnoDB tables have a primary key, as Galera requires PKs for consistent row replication and to prevent write conflicts or data divergence.
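As an illustration, on a node running with innodb_force_primary_key=1, an attempt to create an InnoDB table without a primary key is rejected (the database and table names here are just examples):

```sql
CREATE TABLE test.no_pk (a INT) ENGINE=InnoDB;
-- Rejected with an error similar to:
-- ERROR 1173 (42000): This table type requires a primary key
```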

Example for Galera1:
  • /etc/mysql/my.cnf
[mariadbd]
# Basic settings
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
log_error=/var/log/mysql/mariadb.err
socket=/run/mysqld/mysqld.sock

# Innodb
innodb_force_primary_key=1

# Galera settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="vettabase_galera"
wsrep_cluster_address="gcomm://172.31.2.197,172.31.3.237,172.31.0.181"

# Node identity
wsrep_node_name=galera1
wsrep_node_address="172.31.2.197"

# SST configuration
wsrep_sst_method=mariabackup
wsrep_sst_auth="sst_user:sst_password"

Bootstrap the First Node

Once the configuration files are in place on all nodes, we can bootstrap the cluster – this step initializes the very first node and creates the Primary Component of the Galera cluster.
Only one node should ever be bootstrapped. All other nodes will join it automatically.
Run the following command on Galera1:
sudo systemctl stop mariadb
sudo galera_new_cluster
Check that MariaDB is running:
sudo systemctl status mariadb
Then verify the Galera cluster status by running:
mariadb -u root -p -S /run/mysqld/mysqld.sock \
  -e "SHOW GLOBAL STATUS LIKE 'wsrep%'" \
  | grep -E "^wsrep_(cluster_size|cluster_status|local_state_comment|ready)"
Expected output:
wsrep_local_state_comment Synced
wsrep_cluster_size 1
wsrep_cluster_status Primary
wsrep_ready ON
Explanation:
  • wsrep_local_state_comment = Synced: the node is operational and ready.
  • wsrep_cluster_size = 1: the cluster currently consists of this single node.
  • wsrep_cluster_status = Primary: the cluster has quorum.
  • wsrep_ready = ON: the node can accept queries.
If all these values are correct, Galera1 node has been successfully bootstrapped and is ready for other nodes to join.

Create the SST User

Before adding the remaining nodes, we need to create a dedicated SST user. This user allows the donor node (currently Galera1) to authenticate and send data during SST.
Connect to MariaDB on Galera1:
mariadb -u root -p -S /run/mysqld/mysqld.sock
Then execute the following SQL commands:
CREATE USER 'sst_user'@'%' IDENTIFIED BY 'sst_password';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sst_user'@'%';
FLUSH PRIVILEGES;

Once created, the node is fully ready to act as an SST donor and replicate data to other cluster members.
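Optionally, verify that the grants match the privilege list from the configuration section above:

```sql
SHOW GRANTS FOR 'sst_user'@'%';
```

The output should list the four privileges granted above (RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT) on *.*.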

Joining the Remaining Nodes

With the first node bootstrapped and the SST user created, we can now add the remaining nodes (Galera2 and Galera3) to the cluster. These nodes will automatically synchronize with Galera1 using the configured SST method.

Verify Configuration on Each Node

Make sure the configuration file /etc/mysql/my.cnf on Galera2 and Galera3 is correct:
  • The IP list in wsrep_cluster_address contains all three nodes.
  • Each node has its own unique wsrep_node_name and wsrep_node_address.
  • The same wsrep_cluster_name and wsrep_sst_auth credentials are used as on Galera1.
Example for Galera2 (172.31.3.237):
wsrep_node_name=galera2
wsrep_node_address="172.31.3.237"
Example for Galera3 (172.31.0.181):
wsrep_node_name=galera3
wsrep_node_address="172.31.0.181"
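When managing several nodes, a small helper like this (purely illustrative, using this post’s example names and IPs) can generate the per-node identity stanza and helps avoid copy-paste mistakes such as reusing a node name:

```shell
# Print the node-identity stanza for each cluster member.
# Names and IPs below are this post's example values.
nodes="galera1:172.31.2.197 galera2:172.31.3.237 galera3:172.31.0.181"
for entry in $nodes; do
  name=${entry%%:*}   # part before the colon: node name
  ip=${entry#*:}      # part after the colon: node IP
  printf '# %s\nwsrep_node_name=%s\nwsrep_node_address="%s"\n\n' "$name" "$name" "$ip"
done
```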

Start MariaDB on Each Node

Now start MariaDB one node at a time (first Galera2, then Galera3):
sudo systemctl restart mariadb
When restarting or adding new nodes, you can follow the logs in /var/log/mysql/mariadb.err to monitor the synchronization process. A healthy cluster should show messages similar to these:
WSREP: Server galera1 synced with group
WSREP: Server status change joined -> synced
WSREP: Synchronized with group, ready for connections
WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
WSREP: IST request ... tcp://172.31.3.237:4568
WSREP: 1.0 (galera2): State transfer from 0.0 (galera1) complete.
WSREP: Member 1.0 (galera2) synced with group.
The last line means your cluster is fully operational: all nodes are synchronized and ready to accept client connections.

Galera Health Check

To confirm that your Galera Cluster is healthy and synchronized across all nodes, run the following command on all nodes:
mariadb -u root -p -S /run/mysqld/mysqld.sock \
  -e "SHOW GLOBAL STATUS LIKE 'wsrep%'" \
  | grep -E "^wsrep_(cluster_size|cluster_status|local_state_comment|ready)"
Expected output:
wsrep_local_state_comment Synced
wsrep_cluster_size 3
wsrep_cluster_status Primary
wsrep_ready ON
wsrep_cluster_size = 3 means all three nodes are connected to the cluster.
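To make this check scriptable (for example, in a monitoring probe), you can parse the status output and compare the size against the expected node count. A minimal sketch, using a sample of the output above as input:

```shell
# Sample status lines; in practice, feed in the output of the
# mariadb ... "SHOW GLOBAL STATUS LIKE 'wsrep%'" command shown above.
status='wsrep_cluster_size 3
wsrep_cluster_status Primary'

expected=3
size=$(printf '%s\n' "$status" | awk '$1 == "wsrep_cluster_size" { print $2 }')
state=$(printf '%s\n' "$status" | awk '$1 == "wsrep_cluster_status" { print $2 }')

if [ "$size" = "$expected" ] && [ "$state" = "Primary" ]; then
  echo "cluster OK (size=$size, status=$state)"
else
  echo "cluster DEGRADED (size=$size, status=$state)"
fi
```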

Summary

In this first post of our Vettabase High Availability series, we built a fully functional 3-node MariaDB Galera Cluster on Ubuntu 24.04.
We covered everything from installing MariaDB and configuring Galera parameters to bootstrapping the first node and verifying cluster health. At this point, you should have a stable, synchronized cluster.
In the next post, we’ll cover optional deployment of garbd (Galera Arbitrator Daemon). While a 3-node cluster is the recommended and most resilient topology, allowing the cluster to remain operational even if one node is down, garbd can be useful in scenarios where you temporarily need quorum support without running a full additional database node.
Mykhaylo Rykmas

All content in this blog is distributed under the CreativeCommons Attribution-ShareAlike 4.0 International license. You can use it for your needs and even modify it, but please refer to Vettabase and the author of the original post. Read more about the terms and conditions: https://creativecommons.org/licenses/by-sa/4.0/

About Mike Rykmas
Mike Rykmas is a skilled software engineer with years of expertise in database administration, cloud computing, and IT infrastructure. Thanks to his strong background in data management and performance optimization, Mike has successfully led and implemented scalable solutions, managing petabytes of data to meet diverse business needs.

