PostgreSQL Partitioning for Zabbix
This is a declarative partitioning implementation for the Zabbix history*, trends*, and auditlog tables on PostgreSQL. It is intended to replace standard Zabbix housekeeping for the configured tables. Partitioning is especially useful for large environments because it removes the housekeeper from the process entirely: instead of huge DELETE queries touching millions of rows, old data is removed with fast DDL statements that drop an entire partition at once.
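For illustration, the difference in the statements involved looks roughly like this (the partition name below is hypothetical and depends on the naming scheme used by the maintenance procedure):
-- Housekeeper-style cleanup: a slow, row-by-row DELETE that bloats the table and generates WAL
DELETE FROM history WHERE clock < extract(epoch FROM now() - interval '30 days');
-- Partition-style cleanup: a near-instant metadata operation
DROP TABLE history_p2024_01;  -- hypothetical partition name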
Warning
- Data Visibility: After enabling partitioning, old data remains in the *_old tables and is NOT visible in Zabbix. You must migrate data manually if needed.
- Disable Housekeeping: You MUST disable the Zabbix Housekeeper for History and Trends in Administration -> Housekeeping.
Table of Contents
- Architecture
- Installation
- Configuration
- Maintenance
- Monitoring & Permissions
- Implementation Details
- Upgrades
Architecture
The solution uses PostgreSQL native declarative partitioning (PARTITION BY RANGE).
All procedures, information, statistics, and configuration are stored in the partitions schema to maintain full separation from the Zabbix schema.
Components
- Configuration Table: partitions.config defines retention policies.
- Maintenance Procedure: partitions.run_maintenance() manages the partition lifecycle.
- Monitoring View: partitions.monitoring provides system state visibility.
- Version Table: partitions.version provides information about the installed version of the partitioning solution.
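After installation, a quick way to confirm these objects exist is a generic catalog query (illustrative only, not part of the solution itself):
-- Tables and views installed in the partitions schema
SELECT table_name, table_type FROM information_schema.tables WHERE table_schema = 'partitions';
-- Procedures installed in the partitions schema
SELECT routine_name, routine_type FROM information_schema.routines WHERE routine_schema = 'partitions';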
Installation
The installation is performed by executing the SQL scripts in the following order:
- Initialize schema (00_schema_create.sql).
- Install maintenance procedures (01_maintenance.sql).
- Enable partitioning on tables (02_enable_partitioning.sql).
- Install monitoring views (03_monitoring_view.sql).
Command Example:
You can deploy these scripts manually against your Zabbix database using psql. Navigate to the procedures/ directory and run:
# Connection settings for the Zabbix database (adjust to your environment)
export PGPASSWORD="your_password"
DB_HOST="localhost" # Or your DB endpoint
DB_NAME="zabbix"
DB_USER="zbxpart_admin"
for script in 00_schema_create.sql 01_maintenance.sql 02_enable_partitioning.sql 03_monitoring_view.sql; do
echo "Applying $script..."
psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -f "$script"
done
Configuration
Partitioning policies are defined in the partitions.config table.
| Column | Type | Description |
|---|---|---|
| table_name | text | Name of the Zabbix table (e.g., history, trends). |
| period | text | Partition interval: day, week, or month. |
| keep_history | interval | Data retention period (e.g., 30 days, 12 months). |
| future_partitions | integer | Number of future partitions to pre-create (buffer). Default: 5. |
| last_updated | timestamp | Timestamp of the last successful maintenance run. |
Modifying Retention
To change the retention period for a table, update the configuration:
UPDATE partitions.config
SET keep_history = '60 days'
WHERE table_name = 'history';
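To review all current policies (or to verify the change above), query the configuration table directly:
SELECT table_name, period, keep_history, future_partitions, last_updated
FROM partitions.config
ORDER BY table_name;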
Maintenance
The maintenance procedure partitions.run_maintenance() is responsible for:
- Creating future partitions (current period + future_partitions buffer).
- Creating past partitions (backward coverage based on keep_history).
- Dropping partitions older than keep_history.
This procedure should be scheduled to run periodically (e.g., daily via pg_cron or system cron).
CALL partitions.run_maintenance();
Scheduling Maintenance
To ensure partitions are created in advance and old data is cleaned up, the maintenance procedure should be scheduled to run automatically.
It is recommended to run the maintenance twice a day, avoiding round hours because of the way the housekeeper works (e.g., at 05:30 and 23:30).
- Primary Run: Creates new future partitions and drops old ones.
- Secondary Run: Acts as a safety check. Since the procedure is idempotent (safe to run multiple times), a second run ensures everything is consistent if the first run failed or was interrupted.
You can schedule this using one of the following methods:
Option 1: pg_cron (Recommended)
pg_cron is a cron-based job scheduler that runs directly inside the database as an extension. It is very useful for cloud-based databases such as AWS RDS, Aurora, Azure, and GCP because it handles authentication and connections securely for you and is available as a managed extension, so you do not need to install OS packages or configure anything on the host. Simply modify the RDS Parameter Group to include shared_preload_libraries = 'pg_cron' and cron.database_name = 'zabbix', reboot the instance, and execute CREATE EXTENSION pg_cron;.
Setup pg_cron (Self-Hosted):
- Install the package via your OS package manager (e.g., postgresql-15-cron on Debian/Ubuntu, or pg_cron_15 on RHEL/CentOS).
- Configure it by modifying postgresql.conf:
  shared_preload_libraries = 'pg_cron'
  cron.database_name = 'zabbix'
- Restart PostgreSQL: systemctl restart postgresql
- Connect to your zabbix database as a superuser and create the extension: CREATE EXTENSION pg_cron;
- Schedule the job to run:
  SELECT cron.schedule('zabbix_partition_maintenance', '30 5,23 * * *', 'CALL partitions.run_maintenance();');
⚠️ Troubleshooting pg_cron Connection Errors:
If your cron jobs fail to execute and you see FATAL: password authentication failed in your PostgreSQL logs, it is because pg_cron attempts to connect via TCP (localhost) by default, which usually requires a password.
Solution A: Use Local Unix Sockets (Easier)
Edit your postgresql.conf to force pg_cron to use the local Unix socket (which uses passwordless peer authentication):
cron.host = '/var/run/postgresql' # Or '/tmp', depending on your OS
(Restart PostgreSQL after making this change).
Solution B: Provide a Password (.pgpass)
If you must connect via TCP with a specific database user and password, the pg_cron background worker needs a way to authenticate. You provide this by creating a .pgpass file for the OS postgres user.
- Switch to the OS database user: sudo su - postgres
- Create or append your database credentials to ~/.pgpass using the format hostname:port:database:username:password:
  echo "localhost:5432:zabbix:zabbix:my_secure_password" >> ~/.pgpass
- Set strict permissions (PostgreSQL will ignore the file if permissions are too loose): chmod 0600 ~/.pgpass
Option 2: Systemd Timers
Systemd timers provide better logging and error handling than standard cron.
- Create a service file /etc/systemd/system/zabbix-partitions.service:
  [Unit]
  Description=Zabbix PostgreSQL Partition Maintenance
  After=network.target postgresql.service

  [Service]
  Type=oneshot
  User=postgres
  ExecStart=/usr/bin/psql -d zabbix -c "CALL partitions.run_maintenance();"
- Create a timer file /etc/systemd/system/zabbix-partitions.timer:
  [Unit]
  Description=Run Zabbix Partition Maintenance Twice Daily

  [Timer]
  OnCalendar=*-*-* 05:30:00
  OnCalendar=*-*-* 23:30:00
  Persistent=true

  [Install]
  WantedBy=timers.target
- Enable and start the timer:
  systemctl daemon-reload
  systemctl enable --now zabbix-partitions.timer
Option 3: System Cron (crontab)
Standard system cron is a simple fallback.
Example Crontab Entry (crontab -e):
# Run Zabbix partition maintenance twice daily (5:30 AM and 11:30 PM)
30 5,23 * * * psql -U zabbix -d zabbix -c "CALL partitions.run_maintenance();" >> /var/log/zabbix_maintenance.log 2>&1
Docker Environment: If running in Docker, you can execute it via the host's cron by targeting the container:
30 5,23 * * * docker exec zabbix-db-test psql -U zabbix -d zabbix -c "CALL partitions.run_maintenance();"
Managing pg_cron Jobs
If you are using pg_cron for scheduling, you can verify and manage your jobs (run as superuser):
- To list all active schedules: SELECT * FROM cron.job;
- To view execution logs/history: SELECT * FROM cron.job_run_details;
- To remove/unschedule the job: SELECT cron.unschedule('zabbix_partition_maintenance');
Monitoring & Permissions
System state can be monitored via the partitions.monitoring view. It reports the number of future partitions, the time since the last maintenance run, and the total size of each partitioned table in bytes.
SELECT * FROM partitions.monitoring;
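As an illustration, a query along the following lines could flag tables that are running low on pre-created partitions; the column name used here is an assumption, so adjust it to the actual view definition:
SELECT *
FROM partitions.monitoring
WHERE future_partitions < 2;  -- assumed column name; fewer than 2 future partitions left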
Versioning
To check the installed version of the partitioning solution:
SELECT * FROM partitions.version ORDER BY installed_at DESC LIMIT 1;
Least Privilege Access (zbxpart_monitor)
For monitoring purposes, it is highly recommended to create a dedicated user with read-only access to the monitoring view instead of using the zbxpart_admin owner account.
CREATE USER zbxpart_monitor WITH PASSWORD 'secure_password';
GRANT USAGE ON SCHEMA partitions TO zbxpart_monitor;
GRANT SELECT ON partitions.monitoring TO zbxpart_monitor;
Warning
Because 03_monitoring_view.sql uses a DROP VIEW command to apply updates, re-running the script will destroy all previously assigned GRANT permissions. If you ever update the view script, you must manually re-run the GRANT SELECT command above to restore access for the zbxpart_monitor user!
Implementation Details
auditlog Table
The standard Zabbix auditlog table has a primary key on (auditid). Partitioning by clock requires the partition key to be part of the primary key.
To avoid placing a heavy, blocking lock on the auditlog table while altering its primary key, the enablement script (02_enable_partitioning.sql) detects it and handles it exactly like the history tables: it renames the existing live table to auditlog_old and immediately creates a new, empty partitioned auditlog table with the required (auditid, clock) composite primary key.
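A rough sketch of that approach, assuming the new table copies the old column definitions (the actual script recreates the full Zabbix schema and may differ in detail):
-- Rename the live table; existing audit data stays accessible under the old name
ALTER TABLE auditlog RENAME TO auditlog_old;
-- Create a new, empty partitioned table with the composite primary key
CREATE TABLE auditlog (
    LIKE auditlog_old INCLUDING DEFAULTS,
    PRIMARY KEY (auditid, clock)
) PARTITION BY RANGE (clock);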
Converting Existing Tables
The enablement script achieves practically zero downtime by automatically renaming the existing tables to table_name_old and creating new partitioned tables that match the exact schema.
- Note: Data from the old tables is NOT automatically migrated to minimize downtime.
- New data flows into the new partitioned tables immediately.
- Old data remains accessible in table_name_old for manual lookup or migration if required, as shown below.
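If old data is needed later, it can be queried directly from the *_old table or copied back into the partitioned table. A hedged sketch for history, assuming the standard Zabbix schema (batch the copy and avoid ranges that already exist in the new table on large installations):
-- Query the old data directly
SELECT count(*) FROM history_old;
-- One-off copy of the most recent 30 days back into the partitioned table
INSERT INTO history
SELECT * FROM history_old
WHERE clock >= extract(epoch FROM now() - interval '30 days')::integer;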
Upgrades
When upgrading Zabbix:
- Backup: Ensure a full database backup exists.
- Compatibility: Zabbix upgrade scripts may attempt to ALTER tables. PostgreSQL supports ALTER TABLE on partitioned tables for adding columns, which propagates to all partitions.
- Failure Scenarios: If an upgrade script fails due to partitioning, the table may need to be temporarily reverted or the partition structure manually adjusted.