Backing up your MariaDB database is crucial for data protection and disaster recovery. This tutorial covers how to perform both logical (mariadb-dump) and physical (Mariabackup) backups, add date/time stamps for organization, and finally, how to store these backups securely on AWS S3 for off-site, durable storage.

1. Prerequisites

Before you begin, make sure you have:

  • MariaDB server installed and running.
  • Root or a privileged user account for MariaDB (or a dedicated backup user with specific permissions).
  • Sufficient disk space on your server for temporary local backup storage.
  • gzip for compressing backups (usually pre-installed on Linux) and mbstream/xbstream for extracting streamed physical backups (mbstream is installed with the mariadb-backup package).
  • An AWS account with an IAM user that has programmatic access (Access Key ID and Secret Access Key) and S3 permissions (e.g., s3:PutObject, s3:GetObject, s3:DeleteObject, s3:ListBucket); a sample policy is sketched after this list.
  • cron for scheduling automated backups (usually pre-installed).
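
As a rough sketch, the S3 permissions for that IAM user could be scoped to a single backup bucket as shown below (the user name and bucket name are placeholders; the same statements can be attached to an IAM role instead):

aws iam put-user-policy \
  --user-name mariadb-backup-user \
  --policy-name mariadb-s3-backup \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
        "Resource": "arn:aws:s3:::your-mariadb-backup-bucket-name/*"
      },
      {
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::your-mariadb-backup-bucket-name"
      }
    ]
  }'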

2. Configure AWS CLI for S3 Access

The AWS Command Line Interface (CLI) is what the backup scripts in this tutorial use to upload to and manage objects in S3 from your server.

Install AWS CLI

On Debian/Ubuntu:

sudo apt update
sudo apt install awscli

On CentOS/RHEL:

sudo yum install awscli

Configure AWS Credentials

Once installed, configure the AWS CLI with your IAM user’s credentials:

aws configure

You’ll be prompted to enter:

  • AWS Access Key ID: AKIAIOSFODNN7EXAMPLE (from your IAM user)
  • AWS Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY (from your IAM user)
  • Default region name: us-east-1 (choose the AWS region for your bucket)
  • Default output format: json (you can just press Enter)

These credentials are saved securely on your server (~/.aws/credentials and ~/.aws/config).

Security Tip: For EC2 instances, consider using IAM Roles instead of directly storing credentials. Attach an IAM role with S3 permissions to the instance, and AWS CLI will automatically use those permissions.
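
Before wiring the CLI into backup scripts, it's worth a quick sanity check that the credentials work; for example:

# Confirms which IAM user or role the CLI is authenticating as
aws sts get-caller-identity

# Lists the buckets visible to these credentials (requires s3:ListAllMyBuckets)
aws s3 ls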

3. Create an S3 Bucket

You’ll need a dedicated S3 bucket to store your backups. Bucket names must be globally unique across all AWS accounts.

You can create an S3 bucket via the AWS Management Console or AWS CLI:

aws s3 mb s3://your-mariadb-backup-bucket-name --region your-aws-region

Replace your-mariadb-backup-bucket-name with your desired unique name and your-aws-region (e.g., us-east-1).
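
Optionally, you can harden the new bucket right away. The commands below are a sketch that enables versioning, default encryption (SSE-S3), and the public access block described in the best practices section; replace the bucket name with your own:

# Keep previous object versions to protect against accidental overwrites or deletions
aws s3api put-bucket-versioning \
  --bucket your-mariadb-backup-bucket-name \
  --versioning-configuration Status=Enabled

# Encrypt all new objects at rest with SSE-S3
aws s3api put-bucket-encryption \
  --bucket your-mariadb-backup-bucket-name \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Block all public access to the backup bucket
aws s3api put-public-access-block \
  --bucket your-mariadb-backup-bucket-name \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true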

4. Method 1: Using mariadb-dump (Logical Backup)

mariadb-dump generates a .sql file with SQL statements. It’s flexible and human-readable, ideal for smaller databases. We’ll combine local backup with S3 upload.

Backup a Single Database to S3

  1. Create a backup script (e.g., /usr/local/bin/backup_mariadb_single_to_s3.sh):
#!/bin/bash
# Fail the dump | gzip pipeline if mariadb-dump itself fails, not only the final gzip
set -o pipefail

# --- Configuration ---
DB_USER="your_database_user"
DB_PASS="your_database_password"
DB_NAME="your_database_name" # e.g., my_webapp_db
LOCAL_BACKUP_DIR="/var/backups/mariadb/logical" # Local directory for backups
S3_BUCKET="s3://your-mariadb-backup-bucket-name" # Your S3 bucket name
LOCAL_RETENTION_DAYS=7 # Keep local backups for this many days
S3_RETENTION_DAYS=30   # Keep S3 backups for this many days

# --- Script Logic ---
mkdir -p "${LOCAL_BACKUP_DIR}"

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_LOCAL_FILE="${LOCAL_BACKUP_DIR}/${DB_NAME}_${TIMESTAMP}.sql.gz"
S3_KEY="${DB_NAME}_${TIMESTAMP}.sql.gz" # Object key for S3

echo "Starting mariadb-dump backup of database: ${DB_NAME}"
echo "Local file: ${BACKUP_LOCAL_FILE}"

# Execute mariadb-dump and compress output to local file
mariadb-dump -u "${DB_USER}" -p"${DB_PASS}" "${DB_NAME}" | gzip > "${BACKUP_LOCAL_FILE}"

if [ $? -eq 0 ]; then
    echo "Local backup successful."

    echo "Uploading ${BACKUP_LOCAL_FILE} to S3: ${S3_BUCKET}/${S3_KEY}"
    aws s3 cp "${BACKUP_LOCAL_FILE}" "${S3_BUCKET}/${S3_KEY}"

    if [ $? -eq 0 ]; then
        echo "S3 upload successful!"

        # Clean up old local backups
        echo "Cleaning up local backups older than ${LOCAL_RETENTION_DAYS} days..."
        find "${LOCAL_BACKUP_DIR}" -name "${DB_NAME}_*.sql.gz" -mtime +${LOCAL_RETENTION_DAYS} -delete

        # Clean up old S3 backups (using S3 list and delete)
        echo "Cleaning up S3 backups older than ${S3_RETENTION_DAYS} days in ${S3_BUCKET}..."
        # Note: This shell logic for S3 deletion can be inefficient for many objects.
        # For robust S3 retention, consider using S3 Lifecycle Rules directly on your bucket.
        aws s3 ls "${S3_BUCKET}/${DB_NAME}_" | while read -r line; do
            create_date=$(echo "$line" | awk '{print $1}')
            create_time=$(echo "$line" | awk '{print $2}')
            file_name=$(echo "$line" | awk '{print $4}')

            # Check if the filename matches the expected pattern
            if [[ "$file_name" == "${DB_NAME}_"* ]]; then
                file_epoch=$(date -d "$create_date $create_time" +%s)
                current_epoch=$(date +%s)
                age_seconds=$((current_epoch - file_epoch))
                age_days=$((age_seconds / 86400)) # 86400 seconds in a day

                if (( age_days > S3_RETENTION_DAYS )); then
                    echo "Deleting old S3 backup: ${S3_BUCKET}/${file_name}"
                    aws s3 rm "${S3_BUCKET}/${file_name}"
                fi
            fi
        done
    else
        echo "S3 upload FAILED!"
        exit 1
    fi
else
    echo "Local backup FAILED!"
    exit 1
fi
  2. Make the script executable:
sudo chmod +x /usr/local/bin/backup_mariadb_single_to_s3.sh
  3. Test run the script:
/usr/local/bin/backup_mariadb_single_to_s3.sh
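
Security tip: passing the password with -p on the command line can expose it in the process list. One alternative, sketched below with an assumed path, is a client options file readable only by the user that runs the script; mariadb-dump then works without -u/-p and the DB_PASS variable can be dropped from the script:

# Create a client options file (assumed location /root/.my.cnf if the script runs as root)
sudo tee /root/.my.cnf > /dev/null <<'EOF'
[client]
user=your_database_user
password=your_database_password
EOF
sudo chmod 600 /root/.my.cnf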

Backup All Databases to S3

Modify the script slightly to use --all-databases.

  1. Create a backup script (e.g., /usr/local/bin/backup_mariadb_all_to_s3.sh):
#!/bin/bash
# Fail the dump | gzip pipeline if mariadb-dump itself fails, not only the final gzip
set -o pipefail

# --- Configuration ---
DB_USER="your_database_user"
DB_PASS="your_database_password"
LOCAL_BACKUP_DIR="/var/backups/mariadb/logical" # Local directory for backups
S3_BUCKET="s3://your-mariadb-backup-bucket-name" # Your S3 bucket name
BACKUP_PREFIX="all_databases" # Prefix for the backup filename
LOCAL_RETENTION_DAYS=7
S3_RETENTION_DAYS=30

# --- Script Logic ---
mkdir -p "${LOCAL_BACKUP_DIR}"

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_LOCAL_FILE="${LOCAL_BACKUP_DIR}/${BACKUP_PREFIX}_${TIMESTAMP}.sql.gz"
S3_KEY="${BACKUP_PREFIX}_${TIMESTAMP}.sql.gz" # Object key for S3

echo "Starting mariadb-dump backup of all databases."
echo "Local file: ${BACKUP_LOCAL_FILE}"

# Execute mariadb-dump and compress output to local file
mariadb-dump -u "${DB_USER}" -p"${DB_PASS}" --all-databases | gzip > "${BACKUP_LOCAL_FILE}"

if [ $? -eq 0 ]; then
    echo "Local backup successful."

    echo "Uploading ${BACKUP_LOCAL_FILE} to S3: ${S3_BUCKET}/${S3_KEY}"
    aws s3 cp "${BACKUP_LOCAL_FILE}" "${S3_BUCKET}/${S3_KEY}"

    if [ $? -eq 0 ]; then
        echo "S3 upload successful!"

        # Clean up old local backups
        echo "Cleaning up local backups older than ${LOCAL_RETENTION_DAYS} days..."
        find "${LOCAL_BACKUP_DIR}" -name "${BACKUP_PREFIX}_*.sql.gz" -mtime +${LOCAL_RETENTION_DAYS} -delete

        # Clean up old S3 backups
        echo "Cleaning up S3 backups older than ${S3_RETENTION_DAYS} days in ${S3_BUCKET}..."
        # Note: This shell logic for S3 deletion can be inefficient for many objects.
        # For robust S3 retention, consider using S3 Lifecycle Rules directly on your bucket.
        aws s3 ls "${S3_BUCKET}/${BACKUP_PREFIX}_" | while read -r line; do
            create_date=$(echo "$line" | awk '{print $1}')
            create_time=$(echo "$line" | awk '{print $2}')
            file_name=$(echo "$line" | awk '{print $4}')

            if [[ "$file_name" == "${BACKUP_PREFIX}_"* ]]; then
                file_epoch=$(date -d "$create_date $create_time" +%s)
                current_epoch=$(date +%s)
                age_seconds=$((current_epoch - file_epoch))
                age_days=$((age_seconds / 86400))

                if (( age_days > S3_RETENTION_DAYS )); then
                    echo "Deleting old S3 backup: ${S3_BUCKET}/${file_name}"
                    aws s3 rm "${S3_BUCKET}/${file_name}"
                fi
            fi
        done
    else
        echo "S3 upload FAILED!"
        exit 1
    fi
else
    echo "Local backup FAILED!"
    exit 1
fi
  2. Make the script executable and test run it as above.

Restoring from a mariadb-dump Backup

  1. Download the desired backup file from S3:
aws s3 cp s3://your-mariadb-backup-bucket-name/your_database_name_20250516_135023.sql.gz /tmp/your_database_name_backup.sql.gz
  2. Decompress the backup file:
gunzip < /tmp/your_database_name_backup.sql.gz > /tmp/your_database_name.sql
  3. Restore the database (a single-database dump does not include a CREATE DATABASE statement, so create the database first if it does not already exist):
mysql -u your_database_user -p your_database_name < /tmp/your_database_name.sql
  • For all databases: mysql -u your_database_user -p < /tmp/all_databases.sql
  • Caution: This will overwrite the existing database.
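
If local disk space is tight, the download, decompression, and import can also be chained in a single pipeline, since aws s3 cp can stream an object to stdout when - is given as the destination (the object key and names below follow the earlier examples):

# Stream the compressed dump straight from S3 into the database
aws s3 cp s3://your-mariadb-backup-bucket-name/your_database_name_20250516_135023.sql.gz - \
  | gunzip \
  | mysql -u your_database_user -p your_database_name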

5. Method 2: Using Mariabackup (Physical Online Backup)

Mariabackup is MariaDB’s official tool for physical “hot” backups. It’s faster for large databases and supports incremental backups. We’ll stream the backup to a compressed local file and then upload it to S3.
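
For reference, incremental backups build on a previous full backup taken to a directory rather than a stream; a minimal sketch (directory paths are placeholders, and the incremental chain must be prepared in order before restoring):

# Full backup to a directory that serves as the base for incrementals
mariabackup --backup --target-dir=/var/backups/mariadb/base \
            --user=backup_user --password=your_secure_password

# Incremental backup containing only the changes since the base
mariabackup --backup --target-dir=/var/backups/mariadb/inc1 \
            --incremental-basedir=/var/backups/mariadb/base \
            --user=backup_user --password=your_secure_password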

Install Mariabackup

On Debian/Ubuntu:

sudo apt update
sudo apt install mariadb-backup

On CentOS/RHEL:

sudo yum install mariadb-backup

Create a MariaDB Backup User

Mariabackup requires specific MariaDB privileges. Connect to your MariaDB server and run:

CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'your_secure_password';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;
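
To confirm the new account works before running a backup, you can check its grants:

mariadb -u backup_user -p -e "SHOW GRANTS;"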

Perform a Full Backup to S3

  1. Create a backup script (e.g., /usr/local/bin/mariabackup_full_to_s3.sh):
#!/bin/bash
# Fail the backup | gzip pipeline if mariabackup itself fails, not only the final gzip
set -o pipefail

# --- Configuration ---
DB_USER="backup_user"
DB_PASS="your_secure_password"
LOCAL_BACKUP_DIR="/var/backups/mariadb/physical" # Local directory for backups
S3_BUCKET="s3://your-mariadb-backup-bucket-name" # Your S3 bucket name
LOCAL_RETENTION_DAYS=7
S3_RETENTION_DAYS=30
TEMP_MARIABACKUP_DIR="/tmp/mariabackup_temp_dir" # Temporary directory for mariabackup operations

# --- Script Logic ---
mkdir -p "${LOCAL_BACKUP_DIR}"

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_LOCAL_FILE="${LOCAL_BACKUP_DIR}/full_backup_${TIMESTAMP}.xbstream.gz"
S3_KEY="full_backup_${TIMESTAMP}.xbstream.gz" # Object key for S3

echo "Starting Mariabackup to local file: ${BACKUP_LOCAL_FILE}"

# Execute Mariabackup, stream, and compress directly to local file
# Mariabackup needs a target directory even for streaming; it uses it internally.
mkdir -p "${TEMP_MARIABACKUP_DIR}"
mariabackup --backup \
            --target-dir="${TEMP_MARIABACKUP_DIR}" \
            --user="${DB_USER}" \
            --password="${DB_PASS}" \
            --stream=xbstream | gzip > "${BACKUP_LOCAL_FILE}"

# Capture the backup pipeline's exit status before any other command overwrites $?
BACKUP_STATUS=$?

# Clean up the temporary directory immediately after streaming
rm -rf "${TEMP_MARIABACKUP_DIR}"

if [ ${BACKUP_STATUS} -eq 0 ]; then
    echo "Local Mariabackup successful."

    echo "Uploading ${BACKUP_LOCAL_FILE} to S3: ${S3_BUCKET}/${S3_KEY}"
    aws s3 cp "${BACKUP_LOCAL_FILE}" "${S3_BUCKET}/${S3_KEY}"

    if [ $? -eq 0 ]; then
        echo "S3 upload successful!"

        # Clean up old local backups
        echo "Cleaning up local backups older than ${LOCAL_RETENTION_DAYS} days..."
        find "${LOCAL_BACKUP_DIR}" -name "full_backup_*.xbstream.gz" -mtime +${LOCAL_RETENTION_DAYS} -delete

        # Clean up old S3 backups
        echo "Cleaning up S3 backups older than ${S3_RETENTION_DAYS} days in ${S3_BUCKET}..."
        # Note: This shell logic for S3 deletion can be inefficient for many objects.
        # For robust S3 retention, consider using S3 Lifecycle Rules directly on your bucket.
        aws s3 ls "${S3_BUCKET}/full_backup_" | while read -r line; do
            create_date=$(echo "$line" | awk '{print $1}')
            create_time=$(echo "$line" | awk '{print $2}')
            file_name=$(echo "$line" | awk '{print $4}')

            if [[ "$file_name" == "full_backup_"* ]]; then
                file_epoch=$(date -d "$create_date $create_time" +%s)
                current_epoch=$(date +%s)
                age_seconds=$((current_epoch - file_epoch))
                age_days=$((age_seconds / 86400))

                if (( age_days > S3_RETENTION_DAYS )); then
                    echo "Deleting old S3 backup: ${S3_BUCKET}/${file_name}"
                    aws s3 rm "${S3_BUCKET}/${file_name}"
                fi
            fi
        done
    else
        echo "S3 upload FAILED!"
        exit 1
    fi
else
    echo "Local Mariabackup FAILED!"
    exit 1
fi
  2. Make the script executable and test run it as above.

Prepare the Mariabackup for Restoration

Before restoring, a Mariabackup backup needs to be “prepared” to apply transaction logs.

  1. Download the desired backup file from S3:
aws s3 cp s3://your-mariadb-backup-bucket-name/full_backup_20250516_135023.xbstream.gz /tmp/full_backup_20250516_135023.xbstream.gz
  2. Create a temporary directory for extraction:
mkdir -p /tmp/mariadb_restore
  3. Extract the compressed stream (mbstream is installed with the mariadb-backup package; xbstream from Percona XtraBackup works the same way):
gunzip -c /tmp/full_backup_20250516_135023.xbstream.gz | mbstream -x -C /tmp/mariadb_restore
  4. Prepare the extracted backup:
mariabackup --prepare --target-dir=/tmp/mariadb_restore

You should see completed OK! in the output.

Restoring from Mariabackup

WARNING: This process will replace your existing MariaDB data. Ensure you have a current backup and understand the implications.

  1. Stop your MariaDB server:
sudo systemctl stop mariadb
  2. Clear the existing MariaDB data directory:
# IMPORTANT: Confirm your datadir path (e.g., in /etc/mysql/my.cnf or /etc/my.cnf.d/)
sudo rm -rf /var/lib/mysql/*

Double-check your datadir path! Deleting the wrong directory can lead to irreversible data loss.

  3. Copy the prepared backup data back to the MariaDB data directory:
mariabackup --copy-back --target-dir=/tmp/mariadb_restore
  4. Fix file permissions: Ensure the copied files are owned by the mysql user and group.
sudo chown -R mysql:mysql /var/lib/mysql
  5. Start your MariaDB server:
sudo systemctl start mariadb
  6. Verify: Check MariaDB logs (/var/log/mysql/error.log or journalctl -u mariadb) and database connectivity to ensure a successful restore.
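
A couple of quick post-restore checks, as a sketch (the table name is a placeholder):

# Confirm the service came up cleanly
sudo systemctl status mariadb --no-pager

# Confirm the restored databases and data are visible
mariadb -u root -p -e "SHOW DATABASES;"
mariadb -u root -p -e "SELECT COUNT(*) FROM your_database_name.your_table;"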

6. Automating Backups with cron

cron is a time-based job scheduler that automates your backup scripts.

  1. Ensure your chosen backup script (e.g., backup_mariadb_single_to_s3.sh or mariabackup_full_to_s3.sh) is in /usr/local/bin/ and executable.

  2. Open your crontab file for editing:

crontab -e
  3. Add a line for your backup job. For example, for a daily backup at 2:00 AM:
0 2 * * * /usr/local/bin/your_backup_script.sh >> /var/log/mariadb_s3_backup.log 2>&1
  • 0 2 * * *: Runs at 2 AM every day.
  • >> /var/log/mariadb_s3_backup.log 2>&1: Redirects all output (including errors) to a log file for monitoring.
  4. Save and exit the crontab editor. The job will now run automatically.
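
The two methods can also be combined on different schedules; the entries below are a sketch that assumes the script names used earlier in this tutorial:

# Daily logical dump at 2:00 AM
0 2 * * * /usr/local/bin/backup_mariadb_all_to_s3.sh >> /var/log/mariadb_s3_backup.log 2>&1

# Weekly physical Mariabackup at 3:00 AM on Sundays
0 3 * * 0 /usr/local/bin/mariabackup_full_to_s3.sh >> /var/log/mariadb_s3_backup.log 2>&1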

7. Best Practices

  • Test Your Backups Religiously: A backup is useless if you can’t restore from it. Regularly perform full restores to a test environment to validate your backups and recovery process.
  • Store Backups Off-site: Never store backups on the same server as your database. Use cloud storage (S3, Google Cloud Storage, Azure Blob Storage), a separate backup server, or network-attached storage (NAS).
  • S3 Lifecycle Rules: For robust retention and cost optimization, use S3 Lifecycle Rules directly on your bucket. These rules can automatically move older backups to cheaper storage classes (like S3 Glacier) or expire them after a defined period, which is more efficient than script-based deletion for large datasets. A sample CLI command is sketched after this list.
  • S3 Versioning: Enable S3 bucket versioning to protect against accidental deletions or overwrites. If a file is deleted or replaced, a previous version can be recovered.
  • Encryption: Always enable server-side encryption (SSE-S3) on your S3 bucket for data at rest. Your backup scripts already send data over TLS/SSL to S3, but encryption at rest adds another layer of security.
  • Cross-Region Replication (CRR): For maximum durability and disaster recovery, set up CRR to automatically replicate your S3 bucket to a different AWS region.
  • Monitoring: Set up AWS CloudWatch alarms for S3 bucket metrics (e.g., PutRequests, BucketSizeBytes) and integrate with SNS for notifications on backup success or failure. Also, regularly review your local backup log files.
  • Cost Management: Be mindful of S3 storage costs and data transfer costs. Lifecycle Rules can significantly help optimize costs by moving less-frequently accessed backups to cheaper tiers.
  • IAM Least Privilege: Always follow the principle of least privilege for your IAM user/role. Grant only the necessary S3 permissions.
  • Local Copies: Keep a few recent local copies of your backups. This allows for faster recovery in case of minor issues without needing to download from S3.
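
As referenced in the S3 Lifecycle Rules point above, a minimal sketch of applying a lifecycle configuration from the CLI (the rule ID, day counts, and bucket name are illustrative; adjust them to your retention policy):

aws s3api put-bucket-lifecycle-configuration \
  --bucket your-mariadb-backup-bucket-name \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "mariadb-backup-retention",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 90}
      }
    ]
  }'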